Creating Stockpile Footprints in Topolyst

Several months ago, I introduced Topolyst, our small Unmanned Aerial Systems (sUAS) processing software.  One of the great features in Topolyst is a set of tools that automatically create the footprint (“toe”) of a stockpile and optionally classify overhead points so that they are excluded from subsequent processing (such as cross sections or volumetric computations).  An example of a stockpile with an overhead conveyor, prior to toe finding and classification, is shown in Figure 1.  As seen in the 3D view in the upper right, the conveyor simply blends in with the stockpile, giving a grossly inaccurate volume for this pile.

Figure 1: A typical stockpile with overhead conveyor

The data following Topolyst’s automatic stockpile extraction are shown in Figure 2.  Note the toe in the Map and 3D views as well as the automatic classification of the portion of the conveyor within the toe.  This is an extremely powerful tool available in Topolyst[1] (or in LP360 Advanced) that reduces the work of collecting stockpile volumes significantly.  Our initial release of Topolyst also includes a very powerful collection of 3D feature editing tools that make quick work of manually digitizing toes or cleaning up toes in difficult locations (for example, along pit walls) following automatic extraction.

Figure 2: Automatically extracted stockpile with overhead classification

We have found, from completing many stockpile surveys, that correctly defining the toe is just the beginning!  Mine site operators are keenly interested in consistency.  For example, suppose a stockpile is measured on 5 January to have a volume of 1,000 yards³.  The plant manager sells 500 yards³ from this pile during the period up to the next survey.  She also estimates that 1,000 yards³ were added to the pile.  The next survey should indicate a volume close to 1,500 yards³.  If it does not, the person measuring the volume is the first suspect!
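The reconciliation arithmetic above is simple enough to express directly.  This small sketch uses the numbers from the example, plus a hypothetical measured value for the follow-up survey:

```python
# Reconciliation check for successive stockpile surveys (volumes in yd³).
def expected_volume(previous_survey, sold, added):
    """Volume the next survey should report if the estimates are correct."""
    return previous_survey - sold + added

def discrepancy(measured, previous_survey, sold, added):
    """Difference between the new survey and the reconciled expectation."""
    return measured - expected_volume(previous_survey, sold, added)

# 1,000 yd³ on hand, 500 yd³ sold, 1,000 yd³ added -> expect 1,500 yd³.
print(expected_volume(1000, sold=500, added=1000))    # 1500
# A hypothetical follow-up survey of 1,420 yd³ is "short" by 80 yd³.
print(discrepancy(1420, 1000, sold=500, added=1000))  # -80
```

Any sizable discrepancy between the new survey and this expectation is what triggers the finger-pointing.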

What are the causes of these discrepancies?  The first is, of course, poor estimation.  It is much more difficult to accurately estimate the volume of a pile by “eyeball” than one might guess.  However, we have found the primary culprit to be the definition of the base of the stockpile.

Many mine sites keep a priori survey data that represent the terrain prior to placing any stockpiles (“baseline data” or simply baselines).  Nearly all of the baseline data provided to us have been stereographically collected from a manned aerial survey.  An example is shown in Figure 3.  The magenta points are 3D “mass points” that were derived from a conventional photogrammetric stereo model.

Figure 3: Baseline data (magenta points) superimposed on a shaded relief of the site

The question arises as to how to consistently employ these baselines.  There are several approaches that one can take:

  • Get the mine site owner to agree to use the true surface at the time of data collection and abandon the use of “baseline” data.  There is a strong argument for this approach since the subsurface material will seldom be used.  However, a big one-time inventory adjustment may have to be made.
  • Use the 3D toes to define the vertical edge of a stockpile but pull the base geometry down to the baseline data.
  • Generate a surface model from the baseline data and then use the toes only to define the planimetric placement of the stockpile.
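To make the third option concrete, here is a minimal gridded-volume sketch.  This is not Topolyst’s actual implementation; the grids, cell size and toe mask are hypothetical stand-ins for real data:

```python
import numpy as np

# Method 3 sketch: the toe supplies only the planimetric extent, while the
# base elevations come from a baseline-derived surface.
cell = 1.0  # grid cell size in yards
pile_surface = np.array([[10.0, 12.0], [11.0, 13.0]])   # current DSM (yd)
baseline     = np.array([[ 9.0,  9.5], [ 9.0,  9.5]])   # a priori base (yd)
inside_toe   = np.array([[True, True], [True, False]])  # planimetric footprint

# Only cells inside the toe contribute; negative thickness is clipped,
# since material cannot lie below the chosen base.
thickness = np.where(inside_toe, np.clip(pile_surface - baseline, 0.0, None), 0.0)
volume = thickness.sum() * cell * cell
print(volume)  # cubic yards for this toy grid
```

Swapping `baseline` for a surface interpolated from the 3D toe itself would implement the first approach instead; the difference between the two volumes is exactly the sub-toe material discussed below.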

The third method probably gives the most consistent change of volume record from survey to survey but is it the most technically correct?  This method assumes that all of the material from the toe to the baseline (recall that the baseline is actually under the surface on which the toe lies) could be extracted and used/sold.  This is usually not the case.

As mappers of data, it is important that we advise mine site operators of the advantages and disadvantages of the various methods but, at the end of the day, produce the data according to the customer’s instructions.

Topolyst supports all of the aforementioned techniques for computing volumes (as well as a few others).  For example, the hillshade of Figure 4 is a surface model constructed solely from photogrammetric mass points.  Topolyst has the ability to dynamically use these data as the base when computing volumetrics.  Topolyst also has the ability to generate a LAS file from point, polyline and polygon feature data.  This is extremely useful since this “baseline” LAS can be used in a wide variety of analysis scenarios.

Figure 4: A surface model constructed from photogrammetric mass points

The features we are adding to Topolyst are being driven by our customer needs, our own needs within our analytic services group and by our research and development efforts aimed at process improvement.  I very definitely welcome your feedback on current and needed features in this great product.

[1] LP360 Advanced (standalone) is feature equivalent to Topolyst


AirGon Happenings

I am pleased to announce that AirGon’s request for amendment to its Section 333 waiver for flying commercial small Unmanned Aerial Systems (sUAS) was approved in April.  Our amendment adds all current and future 333 approved aircraft to our 333.  AirGon can now fly any sUAS that has ever been approved by the FAA as well as all future approved systems.  This list currently contains 1,150 different sUAS (AirGon’s own AV-900 is number 207 on the list).  This provides us a lot of flexibility in working with clients; for example, in situations where a glider sUAS is more efficient than a rotor craft.

The FAA has also recently streamlined the process of obtaining an N number for a sUAS.  Prior to the change, a paper process that required several months was the only option.  Now an online system is available, greatly simplifying this procedure.  Note that this is not the new online registration system for hobby drones but rather the system used for obtaining an N number for a manned aircraft (if you are confused, join the club!).  Combined with our new 333 amendment, we can now get a new aircraft legally operating within days.

We continue to do a lot of work to optimize the accuracy of point clouds derived from dense image matching (DIM).  DIM are the data of choice for sUAS mapping since they can be generated from low cost prosumer cameras using standard application software such as Pix4D Mapper or PhotoScan.  The question always remains as to how good these data really are.

It has taken us a lot of experimentation and analysis but we think we have fleshed out a procedure for assuring good absolute vertical accuracy.  It involves the use of Real Time Kinematic (RTK) Global Navigation Satellite System (GNSS) positioning on the sUAS, a local base station that we tie into the national Continuously Operating Reference Station (CORS) network and the National Geodetic Survey’s Online Positioning User Service (OPUS) to “anchor” the project to the network.  We have also discovered that high vertical accuracy cannot be obtained without camera calibration.  We typically use an in situ process for calibration.  We have flown many dozens of sites (primarily mining), giving us a rich set of test data.

I cannot overemphasize how critical network vertical accuracy is.  Most customers want elevation maps of their sites.  These are usually delivered as contour vector files.  As we all know, a 1 foot contour requires vertical accuracy of 1/3 of a foot.  This is a very tight requirement!  A three-inch vertical bias error over an acre is an error of about 400 cubic yards – this is significant.
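The 400 cubic yard figure is easy to verify with a quick back-of-the-envelope check:

```python
# A constant 3-inch vertical bias over one acre, expressed in cubic yards.
SQ_FT_PER_ACRE = 43560.0
bias_ft = 3.0 / 12.0                    # 3 inches in feet
error_cu_ft = bias_ft * SQ_FT_PER_ACRE  # bias volume in cubic feet
error_cu_yd = error_cu_ft / 27.0        # 27 cubic feet per cubic yard
print(round(error_cu_yd))               # about 403 cubic yards
```

Because the error scales linearly with both bias and area, even a one-inch bias over a large site quickly becomes material.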

We see a lot of drone companies processing site data with no control and no RTK/PPK.  While, with the introduction of scale into the model (many companies do not even do this), one might obtain reasonable difference computations (such as volumes), the network accuracy is very poor (obtained from the airborne navigation grade GNSS only) and hence the data are of limited use.  We have discovered that these techniques (where no control and/or RTK/PPK is used) can result in the vertical scale being incorrectly computed.  This means that even differential measurements are not accurate.  Why spend all of the money to collect these data if they are of unknown accuracy?

A more difficult area that we have studied over the past several years is what I refer to as “conformance.”  That is, how well does the DIM actually fit the object being imaged?  DIM processing software (again, such as Pix4D and PhotoScan) does a miraculous job of correlating a 3D surface model from highly redundant imagery using the general class of algorithms called Structure from Motion (SfM).  In addition to the obvious areas where SfM fails (deep shadow, thin linear objects such as poles and wires), a lot of subtle errors occur due to the filtering that is performed by the SfM post-extraction algorithms.  These filtering algorithms are designed to remove noise from the surface model.  Unfortunately, any filtering will also remove true signal, distorting the surface model.

We are working with several of our mining customers to quantify these errors and, once these errors are characterized, to develop best practices to minimize them or at least recognize when and where they occur.  An example of an analysis is shown in Figure 1.  Here we are analyzing a small pile (roughly outlined in orange) of very coarse aggregates with a volume of about 340 cubic yards.  This site was flown with a very high-end manned aircraft LIDAR system and with AirGon’s AV-900 equipped with our RTK system.  The DIM was created using Agisoft PhotoScan.  We obtained excellent accuracy as determined by a number of signalized (meaning ground targets visible in the imagery) control and supplemental topo-only shots.  We used in situ calibration to calibrate the camera (a Sony NEX-5 with a 16 mm pancake lens).

As can be seen in Figure 1, we created a series of cross sections over the test pile.  These cross sections were generated using the Cross Section Point Cloud Task (PCT) in LP360/Topolyst.  This tool drapes cross sections at a user specified interval, conflating the elevation value from the user specified point cloud.  We ran the task twice, conflating Z first from the LIDAR point cloud and then from the DIM.   In Figure 1 we have drawn a profile over one of the cross sections with the result visible in the profile view.  The red cross section is derived from the LIDAR and the green from the DIM.
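The conflation step can be pictured with a toy sketch.  This is not LP360’s Cross Section PCT, just a brute-force nearest-neighbor stand-in with made-up coordinates, but it shows the idea of draping the same stations over two clouds and differencing the resulting profiles:

```python
import numpy as np

def conflate_z(stations_xy, cloud_xyz):
    """Assign each station the Z of the nearest cloud point (brute force)."""
    z = np.empty(len(stations_xy))
    for i, s in enumerate(stations_xy):
        d2 = np.sum((cloud_xyz[:, :2] - s) ** 2, axis=1)
        z[i] = cloud_xyz[np.argmin(d2), 2]
    return z

stations = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])       # section stations
lidar = np.array([[0.0, 0.0, 5.0], [1.0, 0.1, 6.2], [2.0, 0.0, 5.1]])
dim   = np.array([[0.1, 0.0, 5.2], [1.0, 0.0, 5.9], [1.9, 0.0, 5.3]])

z_lidar = conflate_z(stations, lidar)
z_dim   = conflate_z(stations, dim)
print(z_lidar - z_dim)  # per-station disagreement between the two surfaces
```

A production tool would interpolate a TIN or grid rather than snap to the nearest point, but the comparison logic is the same.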

Figure 1: Comparing LIDAR (red) to DIM (green)

Note that the DIM cross section (green) is considerably smoother than the LIDAR cross section (red).  This is caused by several factors:

  • The aggregate of this particular pile is very coarse with some rocks over 2 feet in diameter. This leaves a very undulating surface.  The LIDAR is fairly faithfully following this surface whereas the DIM is averaging over the surface.
  • The AV-900 flight was rather high and the data were collected with a 16 mm lens. This gave a ground sample distance (GSD) a little larger than is typical for this type of project.
  • Due to the coarseness of the aggregate, significant pits appear between the rocks, creating deep shadows. SfM algorithms tend to blur in these regions, rendering the elevation less accurate than in areas of low shadow and good texture.

The impact of lower conformance is a function of both the material and the size of the stockpile (if stockpiles are what you are measuring).  For small piles with very coarse material (as is the case in this example) a volumetric difference between LIDAR and SfM can be as great as 20%.  On larger piles with finer aggregates, the conformance is significantly better.   For example, in this same test project, we observed less than 0.25% difference between LIDAR and the DIM on a pile of #5 gravel containing about 30,000 cubic yards.

There still remains the question of which is more accurate – the volume as computed from the LIDAR or the volume as computed from the DIM?  I think that if the LIDAR are collected with a post spacing ½ the diameter of the average rock, the LIDAR will be the most accurate (assuming that it is well calibrated and flown at very low altitude).   However, the DIM is certainly sufficiently accurate for the vast majority of aggregate volumetric work, so long as a very strict adherence to collection and processing best practices is followed.  For most high accuracy volumetric projects, manned LIDAR flights are prohibitively expensive.

We continue to do many experiments with local and network accuracy as well as methods to improve and quantify conformance.  I’ll report our results here and in other articles as we continue to build our knowledge base.

January 2016

Well, first of all, Happy New Year! May you have a happy and prosperous 2016!

As I mentioned in the December 2015 issue of our newsletter, we are streamlining and focusing our business this year. One of the things we are changing is this newsletter. We were trying to reach way too many disparate audiences and not being effective with any, I fear. We included general industry information related to the geospatial arena, aimed at decision makers. In the same issue, we included tool tips for LP360. This is just too broad.

We have decided to make this a newsletter for our user community (hence the change in name). We have reduced the distribution list to members of organizations who own or subscribe to our products and/or services. We will now be focused on useful information about our tools, consulting services, hosted solutions and the like. We will generally keep our content organized by:

LIDAR Production Solutions – Tools and services for folks who collect kinematic LIDAR data and do primary data processing. Within our solution set, this includes:

  • The GeoCue production software suite
  • Terrasolid products

Point Cloud Exploration Solutions – Tools and services for users who exploit LIDAR point clouds and who collect/exploit point clouds from imagery. Within this solution set are:

  • LP360 in its various incarnations
  • LIDAR Server
  • Pix4D Mapper
  • Agisoft PhotoScan

AirGon – This is the area where we are focused on our CONTINUUM solutions for executing complete small Unmanned Aerial Systems (sUAS) metric mapping missions. Technologies included in this solution area include:

  • AV-900 Metric Mapping Kit
  • Reckon
  • AirGon Sensor Package (our RTK/PPK solution)
  • AirGon mine site mapping services

Of course, there is a lot of cross talk between these solution areas. LP360 is often used by LIDAR production companies both for specialized tools such as breakline digitizing and for managing LAS 1.4 generation and quality checking. The point cloud tools within our Point Cloud Exploration area are key to the AirGon solutions, and so forth. We also do a fair amount of custom development and solution-specific consulting related to our key technologies; these tend to span multiple solution areas.

We will move our general marketing (where we are trying to get new customers interested in our technology) out of GeoCue Group User News and move toward channels such as LinkedIn, general advertising and so forth. This will allow us to provide much more specific value to you, our users.

I am very excited about 2016. We have been doing a lot of product and solution planning focused on the above areas. You, our users, will benefit from solutions that are clearly focused on our target areas and offer best of breed technology.

I wish you a good start to 2016!

October 2015

A special thanks to our customers who attended the LP360 software training that we held at our offices in September. As this core group of customers can attest, a few days invested in training on the latest features and techniques can save weeks of time in production and analysis. I think we all particularly enjoyed the evening social at the Blue Pants Brewery!

Several of us have just returned from a whirlwind three weeks on the road. We attended the American Society for Photogrammetry and Remote Sensing (ASPRS) unmanned aerial system conference in Reno, Nevada at the end of September. We conducted (along with Dr. Qassim Abdullah of Woolpert) the UAS Workshop on the day prior to the conference. We had over 110 participants so the interest in sUAS mapping is only growing.

We next attended the inaugural Commercial UAV Expo hosted by Diversified Communications (the folks who bring you ILMF and SPAR) in Las Vegas, Nevada.   This show had well over 100 exhibitors and about 2,000 attendees. We presented a paper on some of the practical aspects of stockpile volumetrics (sort of a lessons learned overview). I was pleasantly surprised at the number of potential end users who attended this conference. We were constantly busy at our booth discussing mine site mapping with quarry and stockpile owner/operators.  Hopefully it was a mere coincidence but the booth next to ours was a company selling automatic parachutes for multi-rotors!

Many companies who are using point clouds extracted from camera carrying drones are realizing that workflow tools beyond those supplied within the point cloud extraction software are needed to efficiently extract products. We have been seeing a nice uptake of LP360 for sUAS by this set of production companies. Our 2015.1 release (by the end of October, I promise!) includes a few new tools such as an automatic stockpile toe extractor that really speed up these processes.

We have been very heavily involved in collecting mine site surveys using our AV-900 sUAS platform. These engagements have been very enlightening in terms of informing us of the tools that can really make a difference in this type of work. One thing we have paid particular attention to is the frequency with which we are denied access to site areas for placing survey control. Fortunately we have our initial version of a Real Time Kinematic (RTK) positioning system on the AV-900 (we actually use this in Post-Processed Kinematic mode). This allows us to collect mine site data with no control at all (we usually do place some checkpoints to verify accuracy). We have come to realize that this is not a nicety for mine site surveys but rather a necessity.

On the LIDAR front, the USGS 3DEP program continues to gain momentum with a number of new projects underway. An interesting aspect of 3DEP is that the deliveries are required to be compliant with the ASPRS LAS 1.4 format. Both GeoCue and LP360 have been compliant with LAS 1.4 for some time now and offer workflows to realize these delivery requirements.

As we move well into the fourth calendar quarter of 2015, we are heavily engaged in product planning for 2016. I see a continued uptake in the use of small unmanned aerial systems for local area surveys and hence we will continue our rapid pace of tool development for this market. LIDAR continues to be a major data source for base mapping with ever increasing expectations on data density and accuracy. We intend to keep LP360 at the forefront of technology for processing and deriving value from these data. I see cloud-based services as a technology that promises to provide a means of controlling capital expenditures as data densities expand. While data transfer speeds remain a problem (i.e. they are much too slow), we are developing some clever ways to use hybrid deployments to reduce this impact.

Until next time, enjoy some fine fall weather!

Accuracy, Precision and all that

I was recently at a Transportation Research Board subcommittee meeting where we were discussing accuracy and precision (no one used the word “resolution”). After listening for a bit, I experienced a sense of déjà vu. It was a TRB meeting several years ago that inspired me to write an article for my column in LIDAR News (“Random Points”) on this subject. I am repeating the essence of that article here since there is a follow-on discussion that requires this foundation.

There is a lot of argument out in the data acquisition community surrounding these topics. It nearly always comes up in arguments about why a particular vendor’s aerial data is better than that stuff on Google Earth®. There is also the old saw – “bad data is better than no data.” Of course, depending on your use, you better be able to quantify how bad and in what ways.

It’s no wonder that a lot of confusion exists over quantifying accuracy. Every time I have been in a room full of experts, we argue about the specific meaning of the terms. Since I have the floor here for the moment, we’ll go with my descriptions! A caveat however – this article is meant to provide a bit of insight. It is not a vetted technical article and thus you should use my descriptions and analogies with a lot of caution.

We will specifically look at geopositional accuracy as opposed to other accuracy issues such as attributes (i.e. is the ‘color’ attribute correct?). For a more detailed look at geopositional measurement, I think the Washington State DOT “Highway Surveying Manual” is an excellent read (easily found on the web). On the other hand, I find the FGDC standards very dry and light on explanation.

To discuss the geopositional quality of data, I think you need to fully understand the following terms:

  • Network Accuracy (often called “Absolute Accuracy”)
  • Local Accuracy
  • Precision
  • Resolution
  • Density

There is no other way to do this than to just jump in so here we go!

Pick up a ruler and look at it. The fineness (spacing) of the tick marks is the resolution. Similarly, if you have a digital volt meter, the number of digits in the display determines the resolution (it’s a bit more detailed than this but this is close enough for our purposes). Note that this parameter has nothing to do with ‘precision’ or ‘accuracy.’

Precision is a measure of the repeatability of a measurement under identical environmental circumstances (meaning, for example, that if you made repeated length measurements with a steel tape over a number of days where the temperature varied, you would violate the ‘identical environmental conditions’ restriction). It always speaks to repeating the same measurement multiple times. Since in LIDAR and imaging work we very seldom make repeated measurements, this is perhaps the most misrepresented term in our work.

Here is a simple experiment that illustrates precision. Take a tape measure and measure the height of a door. Now, using the exact same measurement spot, tape and technique, repeat this 9 more times. Write down your readings to the highest level of resolution supported by your tape (remember resolution?). The range of your readings gives you a measure of the precision, not only of your device (the tape) but of your system (where you place your eye each time, how closely you hit the same spot, how hard you pull on the tape and so on). The assumption here is that you are measuring some constant object so that variation is solely due to you and your device, not the object being measured. In reality, this may or may not be the case!

Now those of you who have studied basic statistics know that if you repeat these measurements enough times (say 30), a plot of the results will produce the ubiquitous Normal (Gaussian, bell, etc.) curve. Precision is statistically quantified as variance (or standard deviation). Notice that a steel tape and a rubber tape with identical tick marks will have the same resolution but radically different precision.
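The door experiment can be simulated in a few lines.  The readings below are made up, recorded to a hypothetical 1 mm tape resolution; the range and standard deviation are the precision summaries described above:

```python
import statistics

# Ten hypothetical measurements of the same door, written down to the
# tape's 1 mm resolution.
readings_m = [2.001, 2.000, 1.999, 2.002, 2.000,
              1.998, 2.001, 2.000, 1.999, 2.000]

spread = max(readings_m) - min(readings_m)  # range of the readings
sd = statistics.stdev(readings_m)           # precision as a standard deviation
print(f"range = {spread * 1000:.0f} mm, stdev = {sd * 1000:.1f} mm")
```

Note that nothing in this computation references the true height of the door; precision alone says nothing about accuracy.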

I hope that you notice that we still have not touched on accuracy. For example, suppose we did our experiment of making 30 repeated measurements of the height of a specific spot on a door with an uncalibrated (more on this later) tape having a resolution of 0.001 meters. Suppose we came up with an average measurement value of 2.000 m, a largest reading of 2.002 m and a smallest reading of 1.997 m (for you statisticians, let’s say we have a standard deviation of 1.8 mm). What can we say at this point? Well, the resolution of our tape is simply a given (we will ignore fudging resolution by linear interpolation). Our measurement precision is quite “good,” with a maximum deviation of only 3 times the resolution of our tape. However, we cannot say anything at all about accuracy!

Here’s the problem. I can just go into my workshop and whack off a piece of electrician’s steel fish tape. I can mark it with measurements (OK, this would be pretty tedious, I agree!) by just eyeballing and, voilà, I have a steel tape! It would be quite precise if I did not subject it to temperature variations during my sequence of 30 measurements. However, it would, no doubt, be quite inaccurate when compared to a known length. And this is key – you cannot make a judgment about accuracy without having a ‘standard’ to which you are making comparisons.

In geopositional work, we are concerned with two types of accuracy. Network Accuracy (which I usually call absolute accuracy but this is a really loose term) talks about how closely your measurements match a known external reference system (what we call a ‘datum’). Local Accuracy (also often called relative accuracy) deals with the accuracy of the measurement of metric units with respect to a standard. By this we mean if you measure a length or an area, how ‘close’ are you to the true value? Note that you can very accurately measure the distance between two fixed points yet be clueless as to the location of the points relative to some outside reference system (again, the ‘datum’). This is the case with our measurement of the door. If I calibrated my tape by comparing to a ‘standard’ meter, I could then use it to very accurately and precisely (the precision coming from my construction of the tape as verified in my repeated measurements experiment) measure the height of the door. Yet I still would have no idea of where in the ‘world’ the two end points of my measurements were located. This is an example of very good relative (local) accuracy yet very poor network (absolute) accuracy.
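A toy calculation makes the distinction concrete.  Shift every coordinate by a constant (hypothetical) datum bias: distances between points (local accuracy) are unaffected, while absolute positions (network accuracy) are off by the full bias:

```python
import math

def dist(a, b):
    """Planar distance between two points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

p1, p2 = (100.0, 200.0), (103.0, 204.0)  # "true" coordinates
bias = (5.0, -7.0)                       # hypothetical datum misalignment
q1 = (p1[0] + bias[0], p1[1] + bias[1])
q2 = (p2[0] + bias[0], p2[1] + bias[1])

print(dist(p1, p2), dist(q1, q2))  # both 5.0: local accuracy preserved
print(q1)                          # position off by the bias: poor network accuracy
```

This is exactly the door-measurement situation: a well-calibrated tape gives excellent local accuracy with no network accuracy at all.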

So finally we are left with the term density. This parameter is not related to accuracy, precision or resolution. In LIDAR work, it is the number of points per unit area; in imagery work, the number of pixels per unit area. Note that this is often called ‘resolution.’ In imagery work with an array sensor, density may be roughly synonymous with resolution; with scanning LIDAR systems, it seldom is. Am I splitting hairs here? No, not at all. If you followed the above discussion, you will realize that measurements are quantized and the size of the quanta is the resolution, not the density (or point spacing). Basically what I am saying is that it is entirely possible to limit the scanning density of a LIDAR system to roughly 1 point per meter yet have an available horizontal resolution of a few centimeters.
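A synthetic illustration of that last point: thin a cloud to roughly one point per square meter and the density drops, but the surviving coordinates are still resolved to the centimeter.  All numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
# Dense synthetic cloud over a 10 m x 10 m area, coordinates recorded to
# centimeter resolution.
dense = np.round(rng.uniform(0, 10, size=(1000, 2)), 2)

# Keep one point per 1 m grid cell -> density drops to about 1 pt/m².
cells = np.floor(dense).astype(int)
_, keep = np.unique(cells[:, 0] * 100 + cells[:, 1], return_index=True)
thinned = dense[keep]

print(len(dense), len(thinned))  # density changed dramatically...
# ...but the surviving coordinates are still resolved to 0.01 m.
```

Density (point spacing) and resolution (the quantum of the coordinate values) vary independently, which is the whole point of the distinction.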

Figure 1 provides a nice physical representation of these terms. Note that resolution would be the width of the target rings. Here we are imagining that the bull’s eye represents a known location in our datum (maybe we placed some rings around a National Geodetic Survey monument and are taking pot shots with the old Winchester!).

Figure 1: Accuracy, Precision, Resolution

A figure that I lifted directly from Wikipedia (Figure 2) provides a more statistical view of accuracy versus precision. Note here that the distance of the mean (average) of the repeated measurements from the true (reference) value speaks to the accuracy, whereas the ‘spread’ of the measurements (variance) speaks to the precision.

Figure 2: A more statistical view of Accuracy and Precision

With this foundation in terminology, we will address how these factors play in to LIDAR and other point cloud data – stay tuned.