Internalizing LIDAR Data Processing

At GeoCue Group we are involved with customers across the mapping industry, from hardware designers through data collators and data analysts to end users, so we often get asked, "How much data processing should I do myself?"  It is a great question.  How much of any given business process you decide to internalize must be a key part of your overall growth strategy.  Unfortunately, we often see companies make one of two classic mistakes when approaching the question of bringing LIDAR data processing in-house.

The first mistake is to decide to do something just because you can, and being smart engineers and scientists, we all believe we can do LIDAR data processing!  In fact, from a practical point of view, this is probably very true.  Most engineering, survey and mapping firms have the technical capability and skills already on staff, or can acquire them by hiring experienced people, to take on LIDAR data processing.  LIDAR data is no more complex than many of the other geospatial data types companies routinely process in-house.  It has some unique aspects, but the workflows, tools and techniques are very teachable and can be learned, although there is no substitute for experience.  But just because you can do a thing does not mean you should do that thing.  For LIDAR data processing, a compelling business case must exist to justify internalizing the process.

Let’s consider the case of a company that is currently subcontracting out all their LIDAR data production.  Typically, they will be receiving geometrically correct, fully classified point clouds as a deliverable.  There are usually two questions such companies ask when looking at what, if any, of that work would be better done internally. First, do we want to and can we afford to get into the data collection business by buying hardware? Second, if we don’t buy a sensor and continue to pay somebody else to collect, how much of the data processing should we do ourselves?  The hardware question is usually driven by larger business considerations than we are discussing here, given the level of capital investment required.  There is also a clear difference between taking on work that involves field data collection and all the logistics that go along with those activities and taking on what is essentially another back-office data processing workflow.  We usually recommend that if you aren’t already doing field work, don’t decide to get into it by starting with LIDAR data collects.  But what to do about the back-office data processing is always an interesting question for any company.  The advantages of bringing LIDAR data processing in-house are often characterized in terms of cost-savings – our subs are charging us way too much! – and schedule control – our subs are always late!

The cost-saving argument can be a strong one, but it requires careful analysis.  When we discuss standing up a LIDAR data production team of three to five staff, we recommend companies allocate an estimated $65,000 to $95,000 for software licenses, classroom training and updating their IT infrastructure.  The minimum investment, for the smallest operations (single technician, existing IT hardware, limited training), is still going to be in the $20,000 – $25,000 range.  The annual lifecycle cost to maintain this capacity will likely run around 20% of that initial investment per year, covering software maintenance, support, and annual training.  So, the five-year capital investment for our five-person team is going to be around $175,000, or approximately $35,000 per year.  The labor costs are going to be the big variable cost; if you have enough work to keep your new production team busy full-time doing LIDAR data processing, the salary and overhead for a five-person team for the year will likely be significantly larger than your actual capital investment in the software tools.

Unfortunately, it is here that many companies get side-tracked.  They see the large up-front capital investment required for the software and training and struggle to get over that hurdle – because usually someone must be convinced to sign an actual purchase order for this amount! – even though in the long run it is likely the labor costs that will determine the profitability of the venture, not the initial set-up costs.  We often hear from companies that want very detailed breakdowns on pricing and technical capabilities of the software to support their business case but can’t tell us exactly how many people they plan to have working on the data processing or what the annualized labor burden will be.  They focus too much on the software price and not enough on putting the software investment in the context of an overall business case.  Ultimately the actual financial determination in this case is straightforward; if the company is paying more than $35,000 + X per year (where X is the organization’s labor burden based on their projected workload) for LIDAR data processing, they can save money by bringing that data processing in-house.
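The financial test described above can be sketched in a few lines of code.  All the figures below are illustrative placeholders (not a price quote), using the roughly 20% annual lifecycle cost and five-year horizon from the discussion above:

```python
# Rough break-even sketch for internalizing LIDAR data processing.
# All numbers are illustrative placeholders, not actual pricing.

def annualized_capital(initial_investment, annual_lifecycle_rate, years):
    """Initial software/training/IT spend plus yearly lifecycle cost
    (a fraction of the initial investment), spread over the horizon."""
    total = initial_investment * (1 + annual_lifecycle_rate * (years - 1))
    return total / years

def cheaper_in_house(sub_cost_per_year, capital_per_year, labor_per_year):
    """True if in-house processing beats the subcontractor's annual bill."""
    return sub_cost_per_year > capital_per_year + labor_per_year

capital = annualized_capital(95_000, 0.20, 5)   # ~ $34,200/year
print(round(capital))

# In-house wins only if the subs charge more than capital + labor burden (X)
print(cheaper_in_house(sub_cost_per_year=400_000,
                       capital_per_year=capital,
                       labor_per_year=350_000))
```

The point of the sketch is that the labor term dominates: vary `labor_per_year` and the answer flips long before the software capital term matters.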

Control of the data processing, especially schedule control, is the other common justification for internalizing LIDAR data processing.  However, our experience has shown this is often a red herring.  Poor performance on past projects is more likely to indicate a problem with the choice of subcontractor than a process issue.  Internalizing LIDAR data processing will not, by itself, improve on industry best practices, so if you do decide to internalize, getting trained on those best practices is critical!  We work with the best LIDAR data producers in the world and, by applying best practices, being rigorous about workflow management and pursuing constant quality improvement, they all produce great products on time and on budget.  We firmly believe any company willing to invest in the proper software tools and well-trained people can achieve the same results by internalizing the process.  Controlling the data processing does offer the potential to build efficiency improvements into your processes over time that can help shorten delivery schedules, but any credible subcontractor will be doing the same and passing those savings on to their customers anyway.

The second common mistake we see companies make in building their business case for internalizing LIDAR data processing is to delay full implementation or adopt a slow rollout strategy.  LIDAR data processing is one of those activities that benefits greatly from economies of scale and "doing the work."  Achieving a critical mass of expertise on staff and having a constant workload is very important to a successful internalization program.  Having a plan where staff will work on LIDAR part-time, or only at certain times of the year, or only on a certain customer's projects, is usually a very high-risk choice.  Even if the business case appears financially strong, we often caution customers that if they aren't going to truly prioritize LIDAR data processing as a core competency and build a sustainable pipeline of work from Day 1, they may be better off staying with a subcontractor.  Rather than slowly ramping up to a successful deployment, they often end up slow-walking down a dead-end path that leaves them with only a bare minimum of internal capability, despite having invested heavily in the software tools and training.  In the worst-case scenario, these are the companies that we see exit the LIDAR data processing business after 18-36 months with little to show for their investment.  The best way to mitigate the risk of a stalled or under-utilized deployment is to avoid a piecemeal deployment plan; if the financial business case for internalizing LIDAR data processing is there, then be aggressive!

Get the PDF – Internalizing LIDAR Data Processing


GeoCue Launches to Support Hurricane Recovery

When natural disasters occur, one of the more pressing needs of disaster recovery teams is access to trustworthy, pre-event data. Typical needs are for recent aerial images and elevation data, preferably orthoimages and LIDAR data. It can be difficult to find sources for this data that can be easily accessed from any location, that are trustworthy with respect to data integrity and accuracy, and that provide a simple, straightforward interface to extract and deliver data to local computers for processing. The new GeoCue portal provides such an access point for data relevant to the areas in Florida, Texas and Puerto Rico heavily damaged by hurricanes Harvey, Irma and Maria, with free and direct access to pre-event imagery and LIDAR data including:

  • LIDAR data in both American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format as well as compressed LAZ format.
  • 50 cm orthophotography in US Geological Survey (USGS) quarter quad format for the Harris county area, provided by the Texas Natural Resources Information System (TNRIS) and collected on behalf of the Houston-Galveston Area Council (H-GAC).
  • 2004 USACE LIDAR and 2015 National Oceanic and Atmospheric Administration (NOAA) NGS Topobathy LIDAR data in LAS or LAZ format from NOAA for Puerto Rico.
  • US Department of Agriculture National Aerial Imagery Program (NAIP) for all areas
  • Landsat 8 data for all areas

Read Complete Article: GeoCue Launches to Support Hurricane Recovery

The Secret is Out

By Ashlee Hornbuckle

I have been tasked with keeping a secret over the past few months—which is sometimes difficult for me, as I have trouble remembering what I have been told in confidence.  However, Lewis told me a secret so big, I almost blabbed for the mere fact of how utterly exciting it is.  Are you ready for it?

AirGon has developed a direct geopositioning Post-Process Kinematic (PPK) system for the DJI series Inspire 2 and Phantom 4 Pro drones!

Dubbed LOKI, this PPK system is a third generation AirGon design that uses the latest Septentrio GNSS engine, the AsteRx-m2.   The m2 is a triple band GNSS engine, supporting NAVSTAR GPS L1/L2/L5 and GLONASS L1/L2/L3 and sporting 448 hardware channels.  The GeoCue engineers tell me this is the most advanced UAS class receiver on the market today.

LOKI is self-contained and uses an internal battery (charged via a USB port).  It has been designed to survive most crashes and can easily be moved to a new, replacement drone.  LOKI interfaces to the DJI series drones by simply plugging a personality cable into the DJI drone SD card slot, making it a user-installable "plug and play" system.  We use a patent-pending set of hardware and firmware algorithms to determine when the camera is triggered.  Should users elect to use a higher-end drone with a DSLR camera, the LOKI system can be moved by simply using a DSLR personality cable.

LOKI provides a huge advantage over using drones without RTK/PPK.  Without direct geopositioning, dense ground control is required to achieve the accuracy necessary to calculate differential earthworks volumes and to create 2’ (60 cm) or closer contours. This can be extremely time-consuming and sometimes a safety issue. In fact, on some sites ground control cannot be placed at all due to site access restrictions. However, joined with the BYOD Mapping Kit and a base station, users can expect around ⅛ foot horizontal and ¼ foot vertical accuracies with no ground control placement.

The system is scheduled for release in late July/early August. Please contact us for more information on LOKI and the BYOD Mapping Kit.

Terrasolid: The Workhorse is Still a Valuable Tool in LIDAR Production Shops

As the North American reseller for Terrasolid’s software suite, we get to work with the majority of the LIDAR production shops in the US and Canada.  The Terrasolid suite – TerraScan, TerraModeler, TerraMatch and TerraPhoto – continues to be commonplace on the production floor regardless of the type: airborne, mobile or terrestrial.  And increasingly we see UAV operators deploying Terrasolid to assist with their own point cloud workflows, whether LIDAR or imagery based.  The focus of the industry is often on what is new and different and exciting, on the “latest and greatest,” so this week we thought we’d step back from the hype and hoopla and check in with a long-time user of Terrasolid to see how this old workhorse of the LIDAR production shop is doing these days.

We spoke with Amar Nayegandhi, Vice President of Geospatial Technology Services at Dewberry.  Dewberry has been using LIDAR commercially since 1998 – yes, 1998; Dewberry received the first LIDAR task order from the USGS under the Cartographic Services Contract (CSC) – and is well-known and well-respected in the industry.  GeoCue Group sold our first seat of GeoCue and Terrasolid software to Dewberry more than 10 years ago back in 2007.  Dewberry is also a major user of our LP360 software along with many other commercial software tools that are available on the market; basically, they know their stuff when it comes to LIDAR software.

What is the biggest benefit you get from using Terrasolid in your business?

One of the biggest benefits of Terrasolid software is that we can integrate the entire LIDAR workflow into our MicroStation CAD environment.  Our geospatial and engineering professionals have a very good understanding of the CAD environment, which enables us to perform point cloud processing (TerraScan), surface modeling (TerraModeler), and sensor calibration (TerraMatch) directly within it.

Of the four modules, TerraScan is the primary point cloud analysis tool; where do you see it helping you the most?

When we first started working with LIDAR data, just being able to load millions of points into our CAD software was a challenge that TerraScan solved for us.  Now data sets are in the billions of points and expectations of basic point cloud functionality have evolved with the times.  Still, the core functions we use TerraScan for haven’t changed much over the years – our biggest benefit is the automatic bare earth filtering using our proprietary macros developed through years of experience in processing LIDAR data in various environments. Some of the newer tools in TerraScan, like Groups for spatial object classification or the newer surface classifications for pulling ground from noisy UAV data, are really helping as well.  The project and data management tools are also big time-savers we often take for granted.

After TerraScan, what module do you find the most critical for your production?

Probably TerraMatch.  Sensor manufacturers have come a long way in having calibration and geometric correction built right into their pre-processing software, but TerraMatch gives us the ability to independently verify and correct the fit of the data.  We often use TerraMatch to calibrate data in a project area that includes multiple “lifts” because sensor-manufacturer software does not always produce the best fit over lifts that have variable GPS/IMU trajectory solutions. It is also vital for working with older data sets or subcontractor-provided data where we may have no visibility into the calibration process – TerraMatch gives us an independent verification of goodness of fit.  For mobile LIDAR data, with the GPS outage concerns and other aspects particular to driving around in a car as opposed to flying over in an aircraft, having a set of tools like TerraMatch for calibrating the laser scanners and the cameras is absolutely mandatory.

Dewberry is a major LIDAR production shop in the US, certainly one of the biggest.  That is a lot of staff and over the years, staff turnover is inevitable.  How do you find the learning curve for Terrasolid for new users?

Well, like most engineering software, there are many, many buttons to learn and concepts to get straight in your head.  We are processing more than 100,000 sq miles of LIDAR data this season, and though we don’t see a lot of turnover in staff, our staff has almost doubled in the past two years due to increased workload. So, we do face this issue of training our new staff, not just in Terrasolid, but also in understanding our entire production workflow. We have noticed that most new users come up-to-speed pretty quickly as we have them undergo an intensive one to two weeks of training and practice immediately after they are hired.  It’s a huge plus if the new hires are already comfortable with the MicroStation environment.  I would say a new user is productively working unsupervised after 30 days.  They won’t be using the power tools or doing the complex workflows such as developing macros, but they will be productive with the basics like doing a bare earth extraction and editing the point cloud.  And one of the hidden advantages of Terrasolid is that, unlike 10 years ago, you can find many candidates in the employment pool with significant hands-on Terrasolid experience already.

Do you see an alternative or any new contenders you might want to incorporate in your production to replace Terrasolid?

Well, we do keep an eye on alternative software, and we do have other tools in our shop, which we use extensively; but for now we see no benefit to changing our workflow where we use Terrasolid.   With our investment in the suite of bare-earth extraction macros developed by our analysts for various types of data densities, sensors, vegetation, above-ground features, and terrain, as well as the new and interesting features added regularly to the Terrasolid suite, we believe that Terrasolid is reliable, robust and just works to do what we need it to do.

What’s the most interesting or unusual feature in Terrasolid you personally haven’t had a chance to use but would like to?

TerraStereo?  Viewing point clouds directly in stereo seems like it might have some interesting benefits.

Drone Mapping – Business Models Revisited

I am currently attending the 2017 NSSGA/CONEXPO exposition.  One of the keynotes from the National Stone, Sand and Gravel Association (NSSGA) conference focused on the rate of change of technology in the mining industry and the scope of operations that are covered by these technologies.  Of course, one of the examples was the use of drones.  The gist of the discussion was that some of these technologies are in their formative stages; we do not yet fully appreciate the scope of operational effect they will have, but to prosper, knowledge of these systems must be internalized.

One thing is very clear – frequent and repetitive mapping will be required to support the automated machinery that is now appearing on advanced sites.  You cannot program a haul truck for autonomous operations if you do not know the location of the road!  Complicating this issue is the fact that the road location changes nearly daily due to the operation itself.

This future trajectory says that mine site mapping will need to become an internal operation.  It will be impractical from both a logistics and cost perspective to outsource drone mapping services.  A second strong consideration is the rapidity with which drone technology is changing.  I think amortizing the cost of a drone over more than 12 months is just not realistic.

Drones are simply platforms for cameras and other sensors (for example, profilers, laser scanners and so forth).  A drone without a sensor is a fun toy to fly but it is not going to have much use in operations!  I am very excited about new platforms from commercial drone companies (mostly DJI).  These new drones include decent cameras in that they now incorporate larger sensors and hybrid shutters.  You can do a reasonable job of mapping with these yet still use them for inspection videos.

DJI Inspire

So I think what we are seeing is the beginning of the end of the purpose-built drone.  You will be able to purchase drones from DJI (and perhaps others) that are nearly a consumable.  You can use the same drone for inspections as you use for mapping.  This is a very important consideration since this greatly simplifies the training of users.

The bottom line here is this – we are seeing the beginning of drones as an everyday tool for mining, industry and construction.  The proper model is going to be internal control of not only flying the systems but also processing the data.  When you need a quick check of a pulley on a conveyor, you will want an internal staff member to quickly fly the inspection job and post the resultant video.  No need to have a third-party system or contractor involved.  It just complicates the flow and adds expense.  This is really the motivation behind our Bring Your Own Drone (BYOD) Mapping Kit.  It lets you use a low-cost drone such as the DJI Inspire to do serious mapping without a lot of complicated leasing or outsourced data processing arrangements.  It also allows you to use the same platform for inspection that you use for mapping.  Give us a call to see how well this solution will meet your specific needs.

AirGon Partners

We spent a lot of time in November and December of last year (2016) developing a coherent strategy for our AirGon business. As you know from prior newsletters, AirGon LLC is our small Unmanned Aerial Systems (sUAS) subsidiary. We have been developing technology for the past three years aimed at implementing and improving sUAS (or, more commonly, drone) high accuracy mapping. Our focus has been in five major areas:

  • Hardware for RTK/PPK grade geopositioning (the AirGon Sensor Package)
  • Software tools for data processing (Topolyst)
  • Reckon, our Amazon Web Services (AWS)-hosted data management and delivery portal
  • Workflow best practices for project repeatability
  • Production Services for customers who do not want to do their own processing

Addressing the sUAS market is a new challenge for us. There is a surprisingly small overlap between our traditional LIDAR/Photogrammetry marketplace and the new drone business. After a few years in the trenches and hundreds of mapping projects, we are rationalizing our business into three different Partner categories. These are delineated by the type of customer:

Technology Partners – These are customers who purchase technology from us to either use for their own internal operations or to offer services. The technology in our portfolio related to sUAS mapping includes:

  • PhotoScan and Pix4D point cloud generation software
  • Topolyst, our purpose-built point cloud exploitation tool for data from sUAS Laser Scanners (LIDAR) and/or data from dense image matching
  • Bring Your Own Drone (BYOD) Mapping Kit, a collection of software that enables serious mapping with a variety of third party drone hardware from low cost DJI Inspires to professional grade senseFly (eBee) fixed wing drones.
  • Reckon, our Amazon Web Services-hosted site data collaboration and delivery portal. Reckon is a subscription product that allows web-based collaboration between the service provider and end user (who may be one and the same)
  • Various hardware components
  • Consulting services, tailored to needs

Network Partners – The AirGon Network program is an emerging part of our AirGon business. It comprises drone mapping services experts who use our technology for data capture, processing and delivery. Network Partners always interact with their AirGon Network client base via Reckon. We offer regular best practices training, exposure to end-use customers and referrals. We can also provide data processing services to those who wish to focus only on flying. This is a program that requires qualification.

Enterprise Partners – These are end use customers of drone mapping services. An AirGon Enterprise Partner can be as small as a single stockpile yard to as large as a multi-national mining company. Enterprise partners generally engage with us via our CONTINUUM concept, a model that allows a customer to tailor a drone mapping solution that exactly fits their desired business model. For customers who wish to do their own data collection, we offer subscription-based back office processing services. For customers who want to outsource data collection and processing, we link Network partners who are the best match for the desired services and locations.

Please get in touch with us if you are serious about high accuracy drone mapping – we would love to work with you!

What Miners Want

I attended the Commercial UAV Expo in Las Vegas at the end of October.  I gave a talk entitled “Mine Site Mapping – One Year In.”  This talk was on our experiences with performing mine site mapping services with our AirGon Services group.   Our services group is primarily about Research and Development (R&D).  We use our engagements with mining companies to discover the products that they need, accuracy levels and, most of all, how to reliably create these products.  These experiences inform both the development of our technology (the MMK, Topolyst, Reckon, the BYOD Mapping Kit) but also help us develop best practices for both collection and processing.

As I prepared for this presentation, I reviewed the mine site mapping projects we have performed over the past several years to tabulate the products our customers have requested.  These turned out to be, in decreasing order of popularity:

  • Site volumetrics with a priori baseline data
  • Site volumetrics with no prior data
  • Site contours (“topo”) – 2 foot interval
  • Site contours – 1 foot interval
  • Time series volumetrics (“borrow pit”)

In every case, the customer desired a site orthophoto.  In fact, they usually want an ortho of the entire site with analytic products of a subsection of the mine site.

I thought in this month’s section, I would review these products from the acquisition and processing point of view.

Volumetrics with baseline data

I have written a few articles about injecting a priori data into a mapping project.  This is the situation where, at some time in the past, the customer has done a site survey and wants to use these data as the bottom surface of stockpiles.  Their primary desire here is for consistency from inventory to inventory.

An example of this, a large limestone quarry that we fly, is shown in Figure 1.  Here baseline data as well as a reclaim tunnel model have been provided to us as a DWG data set.  The illustration of Figure 1 shows these data being used by Topolyst to create a 3D base surface.


Figure 1:  Bottom Data with reclaim tunnel model


The primary challenge that we have when receiving a priori data is the accuracy of the data.  We often find that these data were obtained by traditional stereo photogrammetric collection techniques, so we do not have a point cloud from which to assess accuracy.  Now, done properly, stereo photogrammetry produces survey grade data.  Unfortunately, much of this a priori data was collected with the surface obstructed by existing stockpiles; in other words, it was not a stockpile-free base mapping.  This means that the stereo compiler had to estimate the ground locations under the existing piles.  We find that in most cases, these estimations are simply linear interpolations from one side of the obscured area to the other.  We often find these bottom models extending above the current surface.  It is difficult to tell whether the data were incorrectly modeled or whether the ground has actually changed since the baseline data were collected.
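A quick sanity check we can describe for this situation is to difference the a priori bottom model against the current drone-derived surface and flag every cell where the "bottom" pokes above today's ground.  The sketch below uses small made-up NumPy grids purely for illustration:

```python
import numpy as np

# Sketch: flag cells where an a priori "bottom" surface sits above the
# current drone-derived surface -- a common symptom of linearly
# interpolated base data. Grids and values are made up for illustration.
current = np.array([[102.0, 103.5, 104.0],
                    [101.0, 102.0, 103.0],
                    [100.5, 101.0, 102.0]])   # today's surface (m)
bottom  = np.array([[101.0, 104.0, 103.0],
                    [100.0, 101.5, 102.5],
                    [100.0, 101.5, 101.0]])   # a priori bottom model (m)

suspect = bottom > current          # bottom model above today's ground
print(int(suspect.sum()), "suspect cells")
print(np.argwhere(suspect).tolist())   # row/col indices to inspect
```

Cells flagged this way are exactly the spots where you cannot tell modeling error from real ground change, so they are worth a conversation with the customer before computing volumes.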

A second big challenge we have with these data is a lack of knowledge on the provider's part as to the exact datum to which the data are referenced.  We are often concerned with elevation differences of just a few centimeters.  The geoid model really matters when you are approaching survey leveling accuracy goals.  We have found, on more than one occasion, a priori data with an incorrect vertical model.  This usually occurs (at least in the USA) as a result of using the incorrect NAD83 to WGS84 transformation.
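The arithmetic behind this problem is simple: orthometric height H is the ellipsoidal height h minus the geoid undulation N, so a mismatched geoid model biases every elevation by the difference between the two N values.  The numbers below are illustrative, not from a real geoid grid:

```python
# Orthometric height H = ellipsoidal height h minus geoid undulation N.
# Using the wrong geoid model (or the wrong NAD83<->WGS84 transform)
# shifts every elevation by the difference in N. Values are illustrative.

def orthometric(h_ellipsoidal, geoid_undulation):
    """Convert an ellipsoidal height to an orthometric height."""
    return h_ellipsoidal - geoid_undulation

h = 123.456                 # meters above the ellipsoid (from PPK GNSS)
n_correct = -28.750         # N from the proper geoid model (assumed)
n_wrong   = -28.690         # N from a mismatched model (assumed)

bias = orthometric(h, n_wrong) - orthometric(h, n_correct)
print(round(bias, 3))       # a systematic few-cm elevation error
```

A systematic shift of this size is invisible in relative volumes but will fail a checkpoint comparison against an independently surveyed benchmark, which is one more reason to always have checkpoints on site.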

Over the past year, we have added a lot of refinements to how Topolyst handles this a priori data.  Those of you who do LIDAR or photogrammetric processing will immediately recognize this as the problem of introducing “breaklines” and “mass points” into a model.  LP360 (Topolyst is just a variant of LP360) has always been a very strong product in terms of breakline modeling.  We have added a few features in this area to improve the modeling as it typically applies in UAS mapping.  We are now at the point where we really do not have any software issues with this sort of modeling but the interpretation problems will always remain.

This type of modeling requires:

  • Direct geopositioning (RTK/PPK) on the drone
  • Multiple surveyed check points on the site for data validation
  • Strong modeling tools such as Topolyst
  • A conference or two with the customer to understand the models
  • A lot of patience when defining stockpiles

Volumes with no a priori data

Here the customer is interested only in the volumes of the piles, without regard to location.  The deliverable is generally a spreadsheet with volume, material type, density and tonnage.  Of course, our customer deliveries are via our cloud data platform, Reckon, so we want the toes to be correctly georeferenced.

If you leave out the correct georeferencing (meaning you compute the volume of the pile but do not necessarily try to align it with an existing map), you have the sort of processing offered by a myriad of web-based solutions such as Kespry.  Under this business model, you typically upload the raw drone images, which have been georeferenced by the navigation grade GNSS for x, y and the drone barometric altimeter for elevation.  This typically provides horizontal accuracy on the order of several meters and vertical accuracy of about 5 meters.  So long as the camera is properly calibrated, this methodology leads to volumes that are accurate to within about 5%.
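Conceptually, the volume computation itself is just a cell-by-cell difference between the pile surface and a base surface, multiplied by the grid cell area.  The toy grids below (a 2x2 patch on a 1 m grid) are made up, but they show why a systematic vertical bias matters: it adds bias times the pile footprint area straight into the volume:

```python
import numpy as np

# Sketch: stockpile volume as the cell-by-cell difference between the
# drone-derived surface and a base surface, times the cell area.
# Grids and values are made up for illustration (1 m grid).
cell_area = 1.0  # square meters per grid cell
surface = np.array([[5.0, 6.0], [4.0, 5.0]])   # pile surface heights (m)
base    = np.array([[1.0, 1.0], [1.0, 1.0]])   # base/toe surface (m)

# Clip negative differences so holes below the base do not subtract volume
volume = np.clip(surface - base, 0, None).sum() * cell_area
print(volume)   # cubic meters

# A 5 cm systematic vertical bias adds bias * footprint area to the volume
bias = 0.05
vol_biased = np.clip(surface + bias - base, 0, None).sum() * cell_area
print(round(vol_biased - volume, 3))
```

On a real site the same arithmetic runs over millions of cells, which is why a direct geopositioning system plus checkpoints, rather than a barometric altimeter, is what keeps the vertical bias term small.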

We never do these projects without some check points.  These are surveyed image identifiable points that we use to check horizontal and vertical accuracy.

The biggest issue we have encountered with this type of project is the definition of the stockpile toe – it is somewhere between comingled piles, it traces along an embankment such as the pit, the stockpile is in a containment bin and so forth.   This requires a lot of careful toe editing in a three-dimensional visualization environment such as Topolyst.

We never have issues with accuracy because we always fly with a direct geopositioning system.  For our MMK, it is a Post-Process Kinematic, PPK, GNSS system.  For the senseFly eBee, it is an onboard RTK system.  We always lay out some checkpoints for project verification.

A very clean mine site with stockpiles sitting on a surface is nearly non-existent (except in our dreams).  While you sometimes encounter sites where you can just manually draw a toe, these sites are nearly always at inventory transfer locations, not working mines.  In fact, of all the mine sites we have surveyed, we have encountered only one “groomed” site (see Figure 2).  Even at this site, the upper left and lower right piles required some disambiguation (wow, that’s a big word!) work to separate the pile edge from encroaching vegetation.

Figure 2: A "groomed" inventory site


Site Contours (“topo”)

A surprising number of customers want contours.  As you know, these are elevation isolines at a particular interval.  Most customers want either 2 foot or 1 foot contour intervals.  These data, in DXF or DWG format, are used as input to mine planning software.  I find this a bit odd since I would think by now that this downstream software would directly ingest a LAS point cloud or at least an elevation model.

Contours are always absolutely referenced to a datum (a “Network”).  This can be a local plant datum or, much more commonly, a mapping horizontal and vertical datum such as a state plane coordinate system for horizontal and NAVD88 with a specific geoid model for vertical (at least in the United States).

You can tie to the datums using either direct geopositioning with onboard RTK/PPK or dense ground control points.  I personally would never collect data that must be tied to a datum without having a few image-identifiable checkpoints.  Unfortunately, this means that you will need at least an RTK rover in your equipment kit.

A good rule of thumb for contours is that the vertical accuracy of the elevation data should be at least three times better than the desired contour interval.  This says that if you are going to produce 1 foot (30 cm) contours, you need 4 inches (10 cm) of vertical accuracy relative to the vertical datum.  When you measure your checkpoints, don’t forget to propagate the error of the base station location (which you might be deriving from an OPUS solution).
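This rule of thumb, with the base station error propagated in, can be sketched in a few lines of Python.  The function name and the numbers are illustrative only (not from any GeoCue/AirGon tool); independent vertical error sources are combined as a root-sum-of-squares.

```python
# Sketch of the vertical accuracy budget described above.  Assumed names
# and numbers for illustration; independent error sources combine as a
# root-sum-of-squares (RSS), and the result is checked against the
# one-third-of-interval rule of thumb.
import math

def meets_contour_spec(checkpoint_rmse_m: float,
                       base_station_error_m: float,
                       contour_interval_m: float) -> bool:
    """True if the propagated vertical error supports the contour interval."""
    # Independent error sources combine as RSS.
    total_error = math.hypot(checkpoint_rmse_m, base_station_error_m)
    # Rule of thumb: network vertical accuracy <= interval / 3.
    return total_error <= contour_interval_m / 3.0

# 1 foot (0.30 m) contours need roughly 10 cm of network vertical accuracy.
print(meets_contour_spec(0.06, 0.05, 0.30))  # 0.078 m <= 0.10 m -> True
print(meets_contour_spec(0.10, 0.08, 0.30))  # 0.128 m >  0.10 m -> False
```

Note that a 6 cm checkpoint RMSE that looks comfortable on its own can fail the budget once an 8 cm base station (e.g., OPUS) uncertainty is folded in.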

Preparing a surface for contour generation is perhaps the most tedious of mine site mapping work.  It is generally the only site mapping you will do that requires full classification of ground points (the source for the contour construction).  An example of 2 foot contours within a mine site is shown in Figure 3.

Figure 3: An example of 2′ contours

Sites with a high degree of vegetation in areas where the customer wants contour lines will have to be collected with either manual RTK profiling (very tedious!) or with a LIDAR system.  You simply cannot get ground points with image-based Structure from Motion (SfM).  No surprise here – this is why LIDAR was adopted for mapping!

If the customer does not want to pay for LIDAR or manual RTK collection, the vegetated areas should be circumscribed with “low confidence” polygons.  You can either exclude the contouring completely from these areas or classify the interior to vegetation and let the exterior contours just pass through the region.  In any event, the customer must be aware that the data are quite inaccurate in these regions.

The SfM algorithm gets quite “confused” in areas with overhead “noise” such as conveyors and vegetation.  This confusion (actually correlation errors) typically manifests as very low points.  You will need to find and clean these points prior to contour generation.
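A minimal sketch of this cleanup step (my own illustration, not the Topolyst implementation) is to flag any point that sits far below the median elevation of its local grid cell:

```python
# Illustrative low-point finder for SfM correlation blunders.  Assumed
# parameters (cell size, threshold) -- not a Topolyst algorithm.  A point
# well below the median elevation of its grid cell is flagged for cleanup.
from collections import defaultdict
from statistics import median

def flag_low_points(points, cell_size=5.0, threshold=2.0):
    """Return indices of points more than `threshold` below their cell median.

    `points` is a list of (x, y, z) tuples in consistent units (e.g., meters).
    """
    # Bin point indices into square grid cells.
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)

    low = []
    for idxs in cells.values():
        z_med = median(points[i][2] for i in idxs)
        low.extend(i for i in idxs if z_med - points[i][2] > threshold)
    return low

# Example: one correlation blunder sits 10 m below the surrounding surface.
pts = [(1, 1, 100.0), (2, 1, 100.2), (1, 2, 99.9), (2, 2, 90.0)]
print(flag_low_points(pts))  # [3]
```

Production tools use more robust statistics than a single cell median, but the idea is the same: the blunders are gross outliers below the true surface, so a local vertical comparison finds them quickly.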


Product generation for UAS mapping requires a lot of front-end planning.  This planning needs to be product-driven.  If your customer (you, yourself, perhaps) needs only volumes with no tie of the toes to a datum, you can get away with no control so long as other information such as camera calibration and flying height is correct.  By the way, we recommend never collecting this way since you are precluded from doing any meaningful time series analysis.

On the other hand, most meaningful data (that is, you can quantify the accuracy relative to a datum) will require a very careful control strategy as well as a rigorous processing workflow with the right tools (meaning Topolyst, of course!).  No matter what geopositioning strategy you employ, you should always have some independent methods for verifying accuracy.

If all of this seems a bit daunting, you can get assistance from us.  Remember, our services group is really our R&D lab.  Our real goal is to sell technology to owner/operators and production companies.  No matter what drone you are using, you can always avail yourself of our consulting services.  We have gained a lot of experience over the past few years, mostly by first doing the wrong thing!  Save yourself this time and money by engaging with us!




AirGon BYOD Mapping Kit

I am excited to give you a preview of the AirGon Bring Your Own Drone (BYOD) Mapping Kit.  What better way to introduce a small, low cost approach to mapping than with our BYOD Marketing Rep, Molly.  OK, OK, a bit of nepotism – she is my granddaughter.


The BYOD Mapping Kit is a collection of software and training that allows you to do mapping with a low cost DJI drone.  Currently the Phantom and Inspire platforms are supported with the new DJI Mavic soon to be added.

The BYOD Mapping Kit includes:

  • Map Pilot for DJI – Autopilot software for your DJI drone (iOS device required)
  • Agisoft PhotoScan for creating ortho mosaics and 3D point clouds
  • AirGon Topolyst for checking accuracy, adjusting/cleaning data and generating analytic products such as hill shades, volumetrics, digitized mapping features, profiles, topographic contours and similar products
  • A three month Level 1 Subscription to Reckon, our Amazon Web Services hosted analytic data management and delivery portal
  • Web Training
  • Monthly training webinars restricted to AirGon mapping customers

The kit is priced at US $7,990.  Just add your own low cost DJI platform and you are in the mapping business!  This is a great way to get your feet wet with drone mapping.  While this kit is suitable for service providers who want to start out with a conservative approach to drone mapping, it is also a great way for owner/operators to experiment with the viability of this approach to data analysis.  For example, we have been working extensively with a paper mill that uses the BYOD with an Inspire to produce volumetrics for wood chip and log piles.  Another example is an asphalt shingle company that is measuring volumes of raw and processed shingles with a Phantom.  Now granted, you are not going to collect accurate 1 foot contours with a Phantom, but you can do some serious analytics that are good enough for many estimation purposes.  This initial BYOD Mapping Kit requires an Apple iOS device (iPhone or iPad) for the autopilot.  We will be adding an Android option by Q1 of 2017.

Of course, you could assemble this yourself by individually acquiring the components.  However, the most important aspect of the BYOD Mapping Kit is the ongoing training you will receive as a member of the program.  We have performed over 250 site mapping projects in the past year and have learned what does not work (almost everything!) and what does work.  For example, you will not get anything close to correct without a process for focal length determination (no, it is not what is written on the lens!), elevation bias removal and a number of other tricks.  Our paper mill customer was experiencing volumetric errors of around 25% using a mainstream point cloud/basic volumetric tool prior to engaging with us.  We did some diagnostics on the process and improved their accuracy to within 5% of reference (reference was a very high accuracy survey conducted by AirGon using our AV-900 helicopter with PPK and survey ground control).  In fact, one of the errors is a transformation problem within the DJI recording software itself.  I can assure you that the BYOD Mapping Program will provide a very rapid Return on Investment via the training alone.  If you decide to move up to a survey grade drone, the PhotoScan and Topolyst software remain the best possible solution.  Thus, your total investment is preserved as you migrate to more capable systems.

If you just want to collect data but do not want to do routine data processing, no problem.  We can do direct data processing for you via our AirGon Services Group or direct you to one of our AirGon Partner Program members.

If you are interested in becoming an AirGon BYOD Mapper, contact Ashlee Hornbuckle at  She will be happy to share detailed information on this program with you.

AirGon Happenings

This has been a very busy time for our AirGon subsidiary.  While our primary focus is delivering hardware and software tools for high accuracy drone mapping, we also provide a limited amount of services.  These services have been extremely helpful in providing a test bed for our positioning and processing tools.

We continue to test and provide feedback to our LP360/Topolyst software development group regarding tools for improving the overall workflow experience.  We run into all sorts of complex modeling situations and we try to assess each in terms of tools that would ease and/or improve the workflows.  For example, you will see a new tool to extract 3D vertices from line work in the latest EXP release of Topolyst/LP360 (standalone).  This tool has been added to assist with modeling Low Confidence Areas (LCA) common to point clouds derived from Structure from Motion (SfM) algorithms.

Another recent addition to LP360/Topolyst (all versions) is a new contour smoothing algorithm.  This algorithm is designed to address the problem of meandering contours in areas of small vertical change (the meanderings are caused by either surface or algorithm noise).  You will find that this new tool greatly enhances the appearance of contours in these problem areas.  A typical meandering contour is depicted in Figure 1.

Figure 1: A typical meandering contour

This same map area after processing through the new smoothing algorithm is shown in Figure 2 – a dramatic improvement!  Our algorithm works in model space (the model on which the contours are based) and hence is guaranteed not to introduce topology errors such as contour crossings.

Figure 2: Contour processed through LP360/Topolyst smoothing algorithm
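To illustrate why working in model space avoids crossing contours (this is my own toy sketch, not the LP360/Topolyst algorithm): if you smooth the elevation grid rather than the contour line work, each contour remains a level set of a single-valued surface, so two contours can never intersect.

```python
# Toy model-space smoothing: a simple 3x3 mean filter on an elevation grid.
# Illustrative only -- not the LP360/Topolyst smoothing algorithm.  Because
# the smoothed result is still a single-valued surface, contours derived
# from it cannot cross one another.
def smooth_grid(grid, passes=1):
    """Apply a 3x3 mean filter to a 2D list of elevations, `passes` times."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(passes):
        out = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                # Average the cell with its in-bounds neighbors.
                nbrs = [grid[rr][cc]
                        for rr in range(max(0, r - 1), min(rows, r + 2))
                        for cc in range(max(0, c - 1), min(cols, c + 2))]
                out[r][c] = sum(nbrs) / len(nbrs)
        grid = out
    return grid

# +/-0.05 of surface noise around a flat 10.0 surface shrinks after smoothing,
# which is exactly what tames meandering contours near the 10.0 isoline.
noisy = [[10.0, 10.05, 9.95], [9.95, 10.05, 10.0], [10.0, 9.95, 10.05]]
smoothed = smooth_grid(noisy)
print(max(abs(v - 10.0) for row in smoothed for v in row) < 0.05)  # True
```

A production algorithm would use a more shape-preserving filter than a plain mean, but the topology guarantee comes from the same principle.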

We have been doing a lot of experiments lately with very low cost drones and cameras (for example, the Inspire Pro) as to their suitability for volumetric mapping.  The results so far are mixed.  We have discovered that, when using no control (an approach often used by folks not well versed in survey grade mapping), an error in the a priori heights fed into SfM software will result in significant scale errors.  These scale errors are not immediately evident since all of the data look terrific!  I hope to publish a report on this within the next 60 days.
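The back-of-the-envelope arithmetic behind this effect is worth seeing.  In this simplified model (my illustration, with assumed numbers), image scale is proportional to flying height, so a fractional height error becomes a linear scale error, which squares into areas and cubes into volumes:

```python
# Simplified scale-error model for uncontrolled SfM reconstructions.
# Assumption: model scale is proportional to flying height, so an a priori
# height error propagates as a uniform 3D scale factor.  Volumes then scale
# with the cube of the linear scale error.
def volume_error_pct(true_height_m: float, assumed_height_m: float) -> float:
    """Percent volume error caused by an erroneous a priori flying height."""
    scale = assumed_height_m / true_height_m   # linear scale factor
    return (scale ** 3 - 1.0) * 100.0          # volumes scale with the cube

# A 5 m height error on a nominal 100 m flight: about a 15.8% volume error,
# yet the reconstructed point cloud itself looks perfectly clean.
print(round(volume_error_pct(100.0, 105.0), 1))
```

This is why the errors are invisible on inspection: the data are internally consistent and merely scaled wrong, and only an independent check (control or checkpoints) exposes the problem.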

We did our first flights under the new Part 107 rules.  We were collecting data near an airport in Class G airspace (something we could not do under the old Section 333 waiver without a special COA).  We always carefully monitor air traffic via a VHF radio.  At one point we heard a pilot declare “I see a drone down there over the mine site!”  This is perfectly OK under Part 107 but takes a bit to get used to!

We are concluding that if you need point clouds from imagery (dense image matching, DIM or Structure from Motion, SfM) to meet the network accuracy requirements for high grade topographic mapping (such as 1 foot contours) you are going to have to use either RTK or PPK on your flight platform.  Even with fairly dense ground control, we are not seeing the accuracy levels we need without RTK/PPK (and we have tried this with different systems and cameras).

We are considering a special training session later this year on drone data workflow processing using PhotoScan/Pix4D and Topolyst.  We also may work with our local flight center to combine this with a Remote Pilot certification training/testing session.  Drop us a line if you are interested in this.



Creating Stockpile Footprints in Topolyst

Several months ago, I introduced Topolyst, our small Unmanned Aerial Systems (sUAS) processing software.  Among its great features are tools to automatically create the footprint (“toe”) of a stockpile and to optionally classify overhead points so that they are excluded from subsequent processing (such as cross sections or volumetric computations).  An example of a stockpile with an overhead conveyor, prior to toe finding and classification, is shown in Figure 1.  As seen in the 3D view in the upper right, the conveyor simply blends in with the stockpile, giving a grossly inaccurate volume for this pile.

Figure 1: A typical stockpile with overhead conveyor

The data following Topolyst’s automatic stockpile extraction are shown in Figure 2.  Note the toe in the Map and 3D views as well as the automatic classification of the portion of the conveyor within the toe.  This is an extremely powerful tool available in Topolyst[1] (or in LP360 Advanced) that reduces the work of collecting stockpile volumes significantly.  Our initial release of Topolyst also includes a very powerful collection of 3D feature editing tools that make quick work of manually digitizing toes or cleaning up toes in difficult locations (for example, along pit walls) following automatic extraction.

Figure 2: Automatically extracted stockpile with overhead classification

We have found, from completing many stockpile surveys, that correctly defining the toe is just the beginning!  Mine site operators are keenly interested in consistency.  For example, suppose a stockpile is measured on 5 January to have a volume of 1,000 cubic yards.  The plant manager sells 500 cubic yards from this pile during the period up to the next survey.  She also estimates that 1,000 cubic yards were added to the pile.  The next survey should indicate a volume close to 1,500 cubic yards.  If it does not, the person measuring the volume is the first suspect!
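The reconciliation in this scenario is simple arithmetic, and it is worth making explicit (illustrative function, not a Topolyst tool): expected volume is the prior survey plus additions minus sales, and the new survey is compared against it with a tolerance.

```python
# Illustrative stockpile reconciliation check -- assumed function name and
# tolerance, not a Topolyst feature.  Expected volume = prior + added - sold;
# the new survey is compared against it as a percent discrepancy.
def reconcile(prior_yd3, added_yd3, sold_yd3, measured_yd3, tol_pct=5.0):
    """Return (expected volume, percent discrepancy, within-tolerance flag)."""
    expected = prior_yd3 + added_yd3 - sold_yd3
    discrepancy_pct = abs(measured_yd3 - expected) / expected * 100.0
    return expected, discrepancy_pct, discrepancy_pct <= tol_pct

# The 5 January scenario: 1,000 measured, 1,000 added, 500 sold,
# and a new survey of 1,450 cubic yards.
expected, pct, ok = reconcile(1000, 1000, 500, 1450)
print(expected, round(pct, 1), ok)  # 1500 3.3 True
```

Of course, the "added" figure is itself an eyeball estimate, which is exactly why the surveyor gets blamed first when the numbers disagree.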

What are the causes of these discrepancies?  The first is, of course, poor estimation.  It is much more difficult to accurately estimate the volume of a pile by “eyeball” than one might guess.  However, we have found the primary culprit to be the definition of the base of the stockpile.

Many mine sites keep a priori survey data that represent the terrain prior to placing any stockpiles (“baseline data” or simply baselines).  Nearly all of the baseline data provided to us has been stereographically collected from a manned aerial survey.  An example is shown in Figure 3.  The magenta points are 3D “mass points” that were derived from a conventional photogrammetric stereo model.

Figure 3: Baseline data (magenta points) superimposed on a shaded relief of the site

The question arises as to how to consistently employ these baselines.  There are several approaches that one can take:

  • Get the mine site owner to agree to use the true surface at the time of data collection and abandon the use of “baseline” data.  There is a lot of argument for this since it is seldom that the subsurface material will be used.  However, a big one-time inventory adjustment may have to be made.
  • Use the 3D toes to define the vertical edge of a stockpile but pull down the base geometry using the baseline data.
  • Generate a surface model from the baseline data and then use the toes only to define the planimetric placement of the stockpile.

The third method probably gives the most consistent change of volume record from survey to survey but is it the most technically correct?  This method assumes that all of the material from the toe to the baseline (recall that the baseline is actually under the surface on which the toe lies) could be extracted and used/sold.  This is usually not the case.
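A grid-based sketch of this third method makes the issue concrete (my own illustration, not the Topolyst implementation): within the toe footprint, the volume is the survey surface minus the baseline surface, summed cell by cell, which necessarily counts everything between the toe and the baseline.

```python
# Grid-based volume for the third method above.  Illustrative sketch with
# assumed data layout -- not the Topolyst implementation.  `inside_toe`
# would come from a point-in-polygon test against the digitized toe.
def stockpile_volume(survey, baseline, inside_toe, cell_area):
    """Sum (survey - baseline) * cell_area over grid cells inside the toe.

    All three grids are 2D lists of the same shape; elevations in meters,
    `cell_area` in square meters.
    """
    vol = 0.0
    for srv_row, base_row, toe_row in zip(survey, baseline, inside_toe):
        for srv, base, inside in zip(srv_row, base_row, toe_row):
            if inside:
                vol += (srv - base) * cell_area
    return vol

# 2 m x 2 m cells; the pile rises 3 m above a flat baseline in two cells.
survey   = [[10.0, 13.0], [13.0, 10.0]]
baseline = [[10.0, 10.0], [10.0, 10.0]]
toe      = [[False, True], [True, False]]
print(stockpile_volume(survey, baseline, toe, cell_area=4.0))  # 24.0
```

Note that if the baseline sits below the surface the toe actually rests on, every cell picks up that extra thickness, which is the overstatement discussed above.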

As mappers of data, it is important that we advise mine site operators of the advantages and disadvantages of the various methods but, at the end of the day, produce the data according to the customer’s instructions.

Topolyst supports all of the aforementioned techniques for computing volumes (as well as a few others).  For example, the hillshade of Figure 4 is a surface model constructed solely from photogrammetric mass points.  Topolyst has the ability to dynamically use these data as the base when computing volumetrics.  Topolyst also has the ability to generate a LAS file from point, polyline and polygon feature data.  This is extremely useful since this “baseline” LAS can be used in a wide variety of analysis scenarios.

Figure 4: A surface model constructed from photogrammetric mass points

The features we are adding to Topolyst are being driven by our customers’ needs, our own needs within our analytic services group and by our research and development efforts aimed at process improvement.  I very definitely welcome your feedback on current and needed features in this great product.

[1] LP360 Advanced (standalone) is feature equivalent to Topolyst