Internalizing LIDAR Data Processing

At GeoCue Group we work with customers across the mapping industry, from hardware designers through data collectors and data analysts to end users, so we are often asked, “How much data processing should I do myself?”  It is a great question.  How much of any given business process you decide to internalize must be a key part of your overall growth strategy.  Unfortunately, we often see companies making one of two classic mistakes when approaching the question of bringing LIDAR data processing in-house.

The first mistake is deciding to do something just because you can, and, being smart engineers and scientists, we all believe we can do LIDAR data processing!  In fact, from a practical point of view, this is probably very true.  Most engineering, survey and mapping firms already have the technical capability and skills on staff, or can acquire them by hiring experienced people, to take on LIDAR data processing.  LIDAR data is no more complex than many of the other geospatial data types companies routinely process in-house.  It has some unique aspects, but the workflows, tools and techniques are very teachable, although there is no substitute for experience.  Still, just because you can do a thing does not mean you should.  For LIDAR data processing, a compelling business case must exist to justify internalizing the process.

Let’s consider the case of a company that is currently subcontracting out all their LIDAR data production.  Typically, they will be receiving geometrically correct, fully classified point clouds as a deliverable.  There are usually two questions such companies ask when looking at what, if any, of that work would be better done internally. First, do we want to and can we afford to get into the data collection business by buying hardware? Second, if we don’t buy a sensor and continue to pay somebody else to collect, how much of the data processing should we do ourselves?  The hardware question is usually driven by larger business considerations than we are discussing here, given the level of capital investment required.  There is also a clear difference between taking on work that involves field data collection and all the logistics that go along with those activities and taking on what is essentially another back-office data processing workflow.  We usually recommend that if you aren’t already doing field work, don’t decide to get into it by starting with LIDAR data collects.  But what to do about the back-office data processing is always an interesting question for any company.  The advantages of bringing LIDAR data processing in-house are often characterized in terms of cost-savings – our subs are charging us way too much! – and schedule control – our subs are always late!

The cost-saving argument can be a strong one, but it requires careful analysis.  When we discuss standing up a LIDAR data production team of three to five staff, we recommend companies allocate an estimated $65,000 to $95,000 for software licenses, classroom training and updating their IT infrastructure.  The minimum investment for the smallest operations (single technician, existing IT hardware, limited training) is still going to be in the $20,000 to $25,000 range.  The annual lifecycle cost to maintain this capacity will likely run around 20% of the initial investment per year, covering software maintenance, support and annual training.  So the five-year capital investment for our five-person team is going to be around $175,000, or approximately $35,000 per year.  Labor is going to be the big variable cost; if you have enough work to keep your new production team busy full-time doing LIDAR data processing, the salary and overhead for a five-person team for the year will likely be significantly larger than your capital investment in the software tools.

Unfortunately, it is here that many companies get side-tracked.  They see the large up-front capital investment required for the software and training and struggle to get over that hurdle – because usually someone must be convinced to sign an actual purchase order for this amount! – even though in the long run it is likely the labor costs that will determine the profitability of the venture, not the initial set-up costs.  We often hear from companies that want very detailed breakdowns of pricing and technical capabilities of the software to support their business case but cannot tell us how many people they plan to have working on the data processing or what the annualized labor burden will be.  They focus too much on the software price and not enough on putting the software investment in the context of an overall business case.  Ultimately, the financial determination is straightforward: if the company is paying more than $35,000 + X per year (where X is the organization's labor burden based on their projected workload) for LIDAR data processing, they can save money by bringing that data processing in-house.
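
To make the arithmetic concrete, here is a minimal break-even sketch in Python.  The capital figures follow the paragraphs above; the labor burden and subcontractor spend are placeholder assumptions you would replace with your own projections.

```python
# Break-even sketch for internalizing LIDAR data processing.
# Capital figures follow the discussion above; labor and subcontractor
# numbers are illustrative placeholders.

initial_investment = 95_000    # software, training, IT (upper end of the range)
lifecycle_rate = 0.20          # annual maintenance, support and training
years = 5

# Five-year capital cost: initial purchase plus lifecycle costs in later years
capital = initial_investment * (1 + lifecycle_rate * (years - 1))
annualized = capital / years   # roughly $35,000 per year

def should_internalize(annual_labor_burden: float, annual_sub_spend: float) -> bool:
    """In-house wins when subcontractor spend exceeds amortized capital plus labor."""
    return annual_sub_spend > annualized + annual_labor_burden

print(f"Five-year capital: ${capital:,.0f} (about ${annualized:,.0f}/year)")
print("Internalize?", should_internalize(annual_labor_burden=300_000,
                                         annual_sub_spend=400_000))
```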

Control of the data processing, especially schedule control, is the other common justification for internalizing LIDAR data processing.  However, our experience has shown this is often a red herring.  Poor performance on past projects is more likely to indicate a problem with the choice of subcontractor than a process issue.  Internalizing LIDAR data processing will not, by itself, improve on best practices, so if you do decide to internalize, getting trained on best practices is critical!  We work with the best LIDAR data producers in the world, and by applying best practices, being rigorous about workflow management and pursuing constant quality improvement, they all produce great products on time and on budget.  We firmly believe any company that is willing to invest in the proper software tools and well-trained people can achieve the same results by internalizing the process.  Controlling the data processing does offer the potential to build efficiency improvements into your processes over time that can help shorten delivery schedules, but any credible subcontractor will be doing the same and passing those savings on to their customers anyway.

The second common mistake we see companies make in building their business case for internalizing LIDAR data processing is to delay full implementation or adopt a slow rollout strategy.  LIDAR data processing is one of those activities that benefits greatly from economies of scale and from simply doing the work.  Achieving a critical mass of expertise on staff and having a constant workload are very important to a successful internalization program.  A plan where staff will work on LIDAR part-time, only at certain times of the year or only on a certain customer's projects is usually a very high-risk choice.  Even if the business case appears financially strong, we often caution customers that if they aren't going to truly prioritize LIDAR data processing as a core competency and build a sustainable pipeline of work from Day 1, they may be better off staying with a subcontractor.  Rather than slowly ramping up to a successful deployment, they often end up slow-walking down a dead-end path that leaves them with only a bare minimum of internal capability despite having invested heavily in the software tools and training.  In the worst-case scenario, these are the companies we see exit the LIDAR data processing business after 18-36 months with little to show for their investment.  The best way to mitigate the risk of a stalled or under-utilized deployment is to avoid a piecemeal deployment plan; if the financial business case for internalizing LIDAR data processing is there, then be aggressive!

Get the PDF – Internalizing LIDAR Data Processing


GeoCue Launches GetLidar.com to Support Hurricane Recovery

When natural disasters occur, one of the more pressing needs of disaster recovery teams is access to trustworthy, pre-event data. Typical needs are for recent aerial images and elevation data, preferably orthoimages and LIDAR data. It can be difficult to find sources for these data that can be easily accessed from any location, that are trustworthy with respect to data integrity and accuracy, and that provide a simple, straightforward interface to extract and deliver data to local computers for processing. GetLidar.com provides such an access point for data relevant to the areas in Florida, Texas and Puerto Rico heavily damaged by hurricanes Harvey, Irma and Maria.

GetLidar.com provides free and direct access to pre-event imagery and LIDAR data, including:

  • LIDAR data in both American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format and compressed LAZ format.
  • 50 cm orthophotography in US Geological Survey (USGS) quarter quad format for the Harris County area, provided by the Texas Natural Resources Information System (TNRIS) and collected on behalf of the Houston-Galveston Area Council (H-GAC).
  • 2004 US Army Corps of Engineers (USACE) LIDAR and 2015 National Oceanic and Atmospheric Administration (NOAA) NGS topobathy LIDAR data in LAS or LAZ format from NOAA for Puerto Rico.
  • US Department of Agriculture National Agriculture Imagery Program (NAIP) imagery for all areas.
  • Landsat 8 data for all areas.

Read Complete Article: GeoCue Launches GetLidar.com to Support Hurricane Recovery

What We Do

Since this is our first general distribution newsletter in a few years (see the introduction), I thought it would be a good idea to review a bit of the history of GeoCue Group Inc. and what we do.

 

We formed the company under the name NIIRS10 (that’s another story!) in July of 2003.  Thus this is our 14th birthday!  Prior to forming this company, I was the CEO of Z/I Imaging, a global photogrammetry hardware and software company.  Z/I was acquired by Intergraph in 2002 (the point at which I left the company).  Intergraph was, of course, subsequently acquired by Hexagon, the new home of Z/I Imaging.  Our core group of developers joined me from Z/I and thus have a long and rich history in developing advanced photogrammetric and LIDAR hardware and tools.

The first product of the company (at that time, NIIRS10) was GeoCue, a set of enterprise workflow management tools aimed at organizations that need to manage professional LIDAR and photogrammetric workflows.  GeoCue was rapidly adopted by most LIDAR production companies (in those years, primarily airborne laser scanning, ALS) in North America, where it remains strong to this day.

Most of our LIDAR workflows incorporated the Terrasolid processing toolset, which led us to form a close relationship with Terrasolid Oy of Finland.  In 2005, Terrasolid asked us to become their North American distributor, providing software, training and maintenance.  This relationship, too, we maintain to this day.

In 2009, we recognized that LIDAR was being exploited, to a small degree, by end users of LIDAR data such as federal, state and local governments.  We quickly entered this business by acquiring QCoherent Software LLC of Colorado Springs, Colorado.  QCoherent was the company that initially developed LP360, an ESRI extension that allowed advanced exploitation of point cloud data directly in an ArcGIS desktop environment.  We quickly internalized this product and developed a robust standalone version for use outside ArcGIS.  Since that time, LP360 has become the standard for LIDAR exploitation (especially QC) in a great many agencies including the USGS, the USACE, the USDA Forest Service, many water management districts and so forth.  LP360 is the desktop LIDAR tool against which all others are judged.

In 2012, we entered the small unmanned aerial systems (sUAS) or “drone” business with the aim of bringing tools such as LP360 to bear on the point clouds produced by dense image matching.  After a very careful evaluation (from an accuracy and model conformance point of view) we selected Agisoft PhotoScan and Pix4D Mapper as the tools of choice for point cloud generation.  We developed agreements with these two companies and began sales and support of these products.  We also started down the path of adding powerful sUAS mapping tools to LP360, such as volumetric analysis.

In 2014 we began development of a global navigation satellite system (GNSS) post-processed kinematic (PPK) direct geopositioning system for our high-end drone, the AV-900.  This resulted in the AirGon Sensor Package (ASP), one of the most accurate positioning systems for rotary-wing drones.  The ASP is based on a Septentrio GNSS engine specifically designed for sUAS operations.  Since we were effectively entering a new market, we decided to launch this through a wholly owned subsidiary, AirGon LLC (www.airgon.com).  We also developed an Amazon Web Services (AWS) drone products hosting site called AirGon Reckon.  Reckon serves as a data repository and information sharing portal for companies with multiple mapping sites (particularly the aggregate mining industry).

In 2015, AirGon began offering limited drone services for companies that were testing the waters of drone-based mapping but did not want to internalize the operations until the workflows and products were proven.  Our aim is not to compete with our customers, of course, but to evangelize this game-changing technology.  We were early to receive an FAA 333 exemption and had certified Part 107 Remote Pilots a few days after that new rule went into effect.  We have now flown over 600 missions with a wide variety of potential drone users, from aggregate mining to paper mills.

Since we have been in business, we have always done some amount of bespoke software development when that development advanced our commercial products.  These bespoke activities have ranged from funded additions to our core software all the way to custom workflow solutions.  An example is a very high throughput Landsat change detection system developed for MDA Information Systems.  This system is controlled and managed by a GeoCue Distributed Processing System.

In 2015, we entered into an agreement with Teledyne Technologies (specifically Teledyne Brown Engineering) to develop an Amazon Web Services based system to manage and disseminate data from their Multi-User System for Earth Sensing (MUSES), a multi-sensor host platform mounted on the International Space Station.  This development, the Earth Sensor Portal (ESP), is being offered as a commercial hosting platform for data from satellite imagery to LIDAR data.

We saw a resurgence of interest in mobile laser scanning (MLS) in late 2016/early 2017.  Our MLS software suite had a bit of a hole in that we did not have a package specifically aimed at asset collection from MLS data.  We have been aware of the Orbit offerings from Orbit GT of Belgium for some time.  Earlier this year, we re-evaluated their offering and signed on as their North American distributor.

 

So this brings us to the present!  We now focus on several business areas:

LIDAR – Both production and exploitation.  Our portfolio includes:

  • The GeoCue product family for workflow management from production to QC.
  • The Terrasolid family of products for industrial strength geometric correction and processing of both ALS and MLS data.
  • LP360 for high performance data editing, QC and specialized functions such as hydro modeling on the product side and a rich exploitation environment (in ESRI and standalone) on the LIDAR consumer side.
  • Orbit GT for feature extraction from MLS data.
  • LIDAR Server for local data management/distribution.
  • Earth Sensor Portal for enterprise data hosting and dissemination.

sUAS (Small Unmanned Aerial Systems) – Data collection, processing and management.  These offerings are via our AirGon subsidiary.  The main thing to remember about AirGon is that we can bootstrap you into the drone mapping business, and do it in a very cost-effective yet completely professional manner.  Our offerings include:

  • Agisoft PhotoScan, Pix4D Mapper – software for generating point clouds and orthomosaics from drone collected imagery.  We offer the software, support and training.
  • LP360 (sUAS licensing level) – The same LP360 as used for LIDAR.  We offer it at a lower price point for small area mapping, such as with drones.  LP360 provides the full workflow from point cloud ingest (point clouds from imagery and LIDAR) to derived product output.  This is really the most powerful tool kit on the market for processing drone data where the desired outputs are high accuracy mapping products.
  • Bring Your Own Drone (BYOD) Mapping Kit – this is a kit of PhotoScan, LP360, a Reckon subscription and comprehensive training that enables data production from any drone.  It has a special emphasis on mapping using low cost DJI drones such as the Inspire 2 or the Phantom 4 Pro.  The BYOD plus a DJI is a great starting point for entering the drone mapping business and we train you in how to be successful.
  • AirGon Reckon – our AWS-hosted data dissemination system for drone data.  This is a great tool for service providers who want to deliver data in a professional, cloud hosted manner to their customers.  Service provider partners can actually use Reckon as a revenue generator for their business.
  • LOKI – This is the most exciting product we have developed in the hardware arena in some time.  It allows you to add PPK direct geopositioning to a Phantom 4 Pro, an Inspire 2 or any drone with a camera equipped with a flash hot shoe.  There is a separate article in this newsletter regarding LOKI.  Add this to a BYOD and you truly will have a professional mapping kit with direct geopositioning using low cost DJI drones!  This is a financial game changer.

ESP (Earth Sensor Portal) – ESP is an Amazon Web Services hosted platform for LIDAR, imagery and related products.  It is a great dissemination platform for agencies that acquire data such as LIDAR and need it securely backed up and made available to stakeholders (including the public) via a web-facing portal.  ESP includes the idea of workflow, so agencies can have us integrate, for example, a QC workflow, allowing their collection contractors to post data directly to their ESP portal.  This is a subscription model that offloads all need for server technology as well as concerns about your own firewall being maliciously penetrated via your data portal.  This is exciting stuff!!

Bespoke Solutions – GeoCue continues to offer custom development when it adds value to our strategic product portfolio.  For example, if you need a niche tool added to LP360, consider discussing a bespoke addition with us. It will show up in the standard code base, maintained as part of the global product.  This prevents you from getting stuck in the situation of having to contract specifically for updates.  On the larger side of the equation, we have developed very large projects for various clients, primarily around LIDAR/imagery data processing, management and dissemination.

 

As you can see, we have a complete product set for several different imagery/LIDAR related production and exploitation scenarios.  We are very happy to entertain your inquiries ranging from simple product questions to those difficult things you encounter in your workflows.  So please keep us in mind when you are thinking of adding a workflow or improving the ones you have.

 

Drone Mapping – Business Models Revisited

I am currently attending the 2017 NSSGA/CONEXPO exposition.  One of the keynotes from the National Stone, Sand and Gravel Association (NSSGA) conference focused on the rate of change of technology in the mining industry and the scope of operations that are covered by these technologies.  Of course, one of the examples was the use of drones.  The gist of the discussion was that some of these technologies are in their formative stages; we do not yet fully appreciate the scope of operational effect they will have, but to prosper, companies must internalize knowledge of these systems.

One thing is very clear – frequent and repetitive mapping will be required to support the automated machinery that is now appearing on advanced sites.  You cannot program a haul truck for autonomous operations if you do not know the location of the road!  Complicating this issue is the fact that the road location changes nearly daily due to the operation itself.

This future trajectory says that mine site mapping will need to become an internal operation.  It will be impractical from both a logistics and cost perspective to outsource drone mapping services.  A second strong consideration is the rapidity with which drone technology is changing.  I think amortizing the cost of a drone over more than 12 months is just not realistic.

Drones are simply platforms for cameras and other sensors (for example, profilers, laser scanners and so forth).  A drone without a sensor is a fun toy to fly but it is not going to have much use in operations!  I am very excited about new platforms from commercial drone companies (mostly DJI).  These new drones include decent cameras in that they now incorporate larger sensors and hybrid shutters.  You can do a reasonable job of mapping with these yet still use them for inspection videos.

DJI Inspire

So I think what we are seeing is the beginning of the end of the purpose-built drone.  You will be able to purchase drones from DJI (and perhaps others) that are nearly a consumable.  You can use the same drone for inspections as you use for mapping.  This is a very important consideration since this greatly simplifies the training of users.

The bottom line here is this – we are seeing the beginning of drones as an everyday tool for mining, industry and construction.  The proper model is going to be internal control of not only flying the systems but also processing the data.  When you need a quick check of a pulley on a conveyor, you will want an internal staff member to quickly fly the inspection job and post the resultant video.  No need to have a third-party system or contractor involved.  It just complicates the flow and adds expense.  This is really the motivation behind our Bring Your Own Drone (BYOD) Mapping Kit.  It lets you use a low-cost drone such as the DJI Inspire to do serious mapping without a lot of complicated leasing or outsourced data processing arrangements.  It also allows you to use the same platform for inspection that you use for mapping.  Give us a call to see how well this solution will meet your specific needs.

What Miners Want

I attended the Commercial UAV Expo in Las Vegas at the end of October.  I gave a talk entitled “Mine Site Mapping – One Year In” about our experiences performing mine site mapping services with our AirGon Services group.   Our services group is primarily about research and development (R&D).  We use our engagements with mining companies to discover the products that they need and the accuracy levels required and, most of all, to learn how to reliably create these products.  These experiences inform the development of our technology (the MMK, Topolyst, Reckon, the BYOD Mapping Kit) and help us develop best practices for both collection and processing.

As I prepared for this presentation, I reviewed the mine site mapping projects we have performed over the past several years to tabulate the products our customers have requested.  These turned out to be, in decreasing order of popularity:

  • Site volumetrics with a priori baseline data
  • Site volumetrics with no prior data
  • Site contours (“topo”) – 2-foot interval
  • Site contours – 1-foot interval
  • Time series volumetrics (“borrow pit”)

In every case, the customer desired a site orthophoto.  In fact, they usually want an ortho of the entire site with analytic products of a subsection of the mine site.

I thought in this month’s section, I would review these products from the acquisition and processing point of view.

Volumetrics with baseline data

I have written a few articles about injecting a priori data into a mapping project.  This is the situation where, at some time in the past, the customer has done a site survey and wants to use these data as the bottom surface of stockpiles.  Their primary desire here is for consistency from inventory to inventory.

An example of this, a large limestone quarry that we fly, is shown in Figure 1.  Here, baseline data as well as a reclaim tunnel model were provided to us as a DWG data set.  Figure 1 shows these data being used by Topolyst to create a 3D base surface.

 

Figure 1: Bottom Data with reclaim tunnel model

The primary challenge that we have when receiving a priori data is the accuracy of the data.  We often find that these data were obtained by traditional stereo photogrammetric collection techniques, so we do not have a point cloud from which to assess accuracy.  Now, done properly, stereo photogrammetry produces survey-grade data.  Unfortunately, much of this a priori data was collected with the surface obstructed by existing stockpiles; in other words, it was not a stockpile-free base mapping.  This means that the stereo compiler had to estimate ground locations beneath the existing piles.  We find that in most cases these estimations are simply linear interpolations from one side of the obscured area to the other.  We often find these bottom models extending above the current surface.  It is difficult to tell whether the data were incorrectly modeled or the ground has actually changed since the baseline data were collected.

A second big challenge with these data is a lack of knowledge by the provider as to the exact datum to which the data are referenced.  We are often concerned with elevation differences of just a few centimeters.  The geoid model really matters when you are approaching survey-leveling accuracy goals.  We have found, on more than one occasion, a priori data with an incorrect vertical model.  This usually occurs (at least in the USA) as a result of using the incorrect NAD83 to WGS84 transformation.

Over the past year, we have added a lot of refinements to how Topolyst handles this a priori data.  Those of you who do LIDAR or photogrammetric processing will immediately recognize this as the problem of introducing “breaklines” and “mass points” into a model.  LP360 (Topolyst is just a variant of LP360) has always been a very strong product in terms of breakline modeling.  We have added a few features in this area to improve the modeling as it typically applies in UAS mapping.  We are now at the point where we really do not have any software issues with this sort of modeling but the interpretation problems will always remain.

This type of modeling requires:

  • Direct geopositioning (RTK/PPK) on the drone
  • Multiple surveyed check points on the site for data validation
  • Strong modeling tools such as Topolyst
  • A conference or two with the customer to understand the models
  • A lot of patience when defining stockpiles

Volumes with no a priori data

Here the customer is interested only in the volumes of the piles, without regard to location.  The deliverable is generally a spreadsheet with volume, material type, density and tonnage.  Of course, our customer deliveries are via our cloud data platform, Reckon, so we want the toes to be correctly georeferenced.

If you leave out the correct georeferencing (meaning you compute the volume of the pile but do not necessarily try to align it with an existing map), you have the sort of processing offered by a myriad of web-based solutions such as Kespry.  Under this business model, you typically upload raw drone images that have been georeferenced by the navigation-grade GNSS for x, y and by the drone's barometric altimeter for elevation.  This typically provides horizontal accuracy on the order of several meters and vertical accuracy of about 5 meters.  So long as the camera is properly calibrated, this methodology yields volumes accurate to within about 5%.
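
As an aside, the arithmetic behind grid-based volumetrics is simple to sketch.  The following Python fragment is not the Topolyst (or any vendor's) implementation; it just sums heights above an assumed base surface on a regular grid, with random stand-in arrays where real DSM and base rasters would go.

```python
import numpy as np

# Grid-based stockpile volume sketch: sum (surface - base) over the cells
# that fall inside the pile.  Arrays are stand-ins for real rasters.

cell = 0.25                                # grid cell size, meters (assumed)
dsm = np.random.rand(200, 200) * 4.0       # stand-in drone-derived surface (m)
base = np.zeros_like(dsm)                  # stand-in base/bottom surface (m)

heights = np.clip(dsm - base, 0.0, None)   # count only material above the base
volume_m3 = heights.sum() * cell * cell    # height times cell area, summed
volume_yd3 = volume_m3 * 1.30795           # cubic meters to cubic yards

print(f"Pile volume: {volume_m3:,.1f} m^3 ({volume_yd3:,.0f} yd^3)")
```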

We never do these projects without some checkpoints.  These are surveyed, image-identifiable points that we use to check horizontal and vertical accuracy.

The biggest issue we have encountered with this type of project is the definition of the stockpile toe: it is somewhere between commingled piles, it traces along an embankment such as the pit wall, the stockpile is in a containment bin and so forth.   This requires a lot of careful toe editing in a three-dimensional visualization environment such as Topolyst.

We never have issues with accuracy because we always fly with a direct geopositioning system.  For our MMK, it is a post-processed kinematic (PPK) GNSS system.  For the senseFly eBee, it is an onboard RTK system.  We always lay out some checkpoints for project verification.

A very clean mine site with stockpiles sitting on a surface is nearly non-existent (except in our dreams).  While you sometimes encounter sites where you can just manually draw a toe, these sites are nearly always at inventory transfer locations, not working mines.  In fact, of all the mine sites we have surveyed, we have encountered only one “groomed” site (see Figure 2).  Even at this site, the upper left and lower right piles required some disambiguation (wow, that’s a big word!) work to separate the pile edge from encroaching vegetation.

Figure 2: A “groomed” inventory site

Site Contours (“topo”)

A surprising number of customers want contours.  As you know, these are elevation isolines at a particular interval.  Most customers want either 2-foot or 1-foot contour intervals.  These data, in DXF or DWG format, are used as input to mine planning software.  I find this a bit odd since I would think by now that this downstream software would directly ingest a LAS point cloud or at least an elevation model.

Contours are always absolutely referenced to a datum (a “Network”).  This can be a local plant datum or, much more commonly, a mapping horizontal and vertical datum such as a state plane coordinate system for horizontal and NAVD88 with a specific geoid model for vertical (at least in the United States).

You can tie to the datums using either direct geopositioning with onboard RTK/PPK or dense ground control points.  I personally would never collect data that must be tied to a datum without having a few image-identifiable checkpoints.  Unfortunately, this means that you will need at least an RTK rover in your equipment kit.

A good rule of thumb for contours is that the vertical accuracy of the elevation data should be at least three times better than the desired contour interval, that is, one-third of the interval.  This says that if you are going to produce 1-foot (30 cm) contours, you need 4 inches (10 cm) of vertical accuracy relative to the vertical datum.  When you measure your checkpoints, don't forget to propagate the error of the base station location (which you might be deriving from an OPUS solution).
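
Here is a quick numeric check of that rule of thumb, with the base station error combined by root-sum-square; all values are illustrative, not from any particular project.

```python
import math

contour_interval_cm = 30.0                 # 1-foot contours
required_cm = contour_interval_cm / 3.0    # rule of thumb: about 10 cm

checkpoint_rmse_cm = 4.0   # checkpoint residuals against the surface (assumed)
base_error_cm = 3.0        # base position uncertainty, e.g. from OPUS (assumed)

# Independent error sources combine as the root-sum-square
total_cm = math.hypot(checkpoint_rmse_cm, base_error_cm)
print(f"Required: {required_cm:.1f} cm, estimated: {total_cm:.1f} cm")
print("Meets spec" if total_cm <= required_cm else "Does not meet spec")
```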

Preparing a surface for contour generation is perhaps the most tedious of mine site mapping work.  It is generally the only site mapping you will do that requires full classification of ground points (the source for the contour construction).  An example of 2 foot contours within a mine site is shown in Figure 3.

Figure 3: An example of 2′ contours

Sites with a high degree of vegetation in areas where the customer wants contour lines will have to be collected with either manual RTK profiling (very tedious!) or with a LIDAR system.  You simply cannot get ground points with image-based Structure from Motion (SfM).  No surprise here – this is why LIDAR was adopted for mapping!

If the customer does not want to pay for LIDAR or manual RTK collection, the vegetated areas should be circumscribed with “low confidence” polygons.  You can either exclude the contouring completely from these areas or classify the interior as vegetation and let the exterior contours just pass through the region.  In any event, the customer must be aware that the data are quite inaccurate in these regions.

The SfM algorithm gets quite “confused” in areas with overhead “noise” such as conveyors and vegetation.  This confusion (actually correlation errors) typically manifests as very low points.  You will need to find and clean these points prior to contour generation.
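
To illustrate the idea (this is not the algorithm of any particular product), a simple low-point filter can flag points that sit well below the median elevation of their immediate neighborhood; the cell size and tolerance here are arbitrary placeholders you would tune per site.

```python
import numpy as np

def flag_low_points(xyz, cell=2.0, tolerance=1.5):
    """Flag suspected SfM low points: points far below their neighborhood median.

    xyz       : (N, 3) array of point coordinates, meters
    cell      : size of the square neighborhood bins, meters (assumed)
    tolerance : allowed depth below the bin median, meters (assumed)
    """
    keys = np.floor(xyz[:, :2] / cell).astype(int)   # bin each point into a grid cell
    order = np.lexsort((keys[:, 1], keys[:, 0]))     # group identical bins together
    sorted_keys = keys[order]
    breaks = np.any(np.diff(sorted_keys, axis=0) != 0, axis=1)
    starts = np.concatenate(([0], np.nonzero(breaks)[0] + 1, [len(order)]))

    mask = np.zeros(len(xyz), dtype=bool)
    for s, e in zip(starts[:-1], starts[1:]):        # one pass per occupied bin
        idx = order[s:e]
        z = xyz[idx, 2]
        mask[idx] = z < np.median(z) - tolerance     # well below the local median
    return mask
```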

Conclusions

Product generation for UAS mapping requires a lot of front-end planning.  This planning needs to be product-driven.   If your customer (you, yourself, perhaps) needs only volumes with no tie of the toes to a datum, you can get away with no control so long as other information such as camera calibration and flying height is correct.  By the way, we recommend never collecting this way since you are precluded from doing any meaningful time series analysis.

On the other hand, most meaningful data (that is, you can quantify the accuracy relative to a datum) will require a very careful control strategy as well as a rigorous processing workflow with the right tools (meaning Topolyst, of course!).  No matter what geopositioning strategy you employ, you should always have some independent methods for verifying accuracy.

If all of this seems a bit daunting, you can get assistance from us.  Remember, our services group is really our R&D lab.  Our real goal is to sell technology to owner/operators and production companies.  No matter what drone you are using, you can always avail yourself of our consulting services.  We have gained a lot of experience over the past few years, mostly by first doing the wrong thing!  Save yourself that time and money by engaging with us!

AirGon Happenings

I am pleased to announce that AirGon's request for amendment to its Section 333 exemption for flying commercial small unmanned aerial systems (sUAS) was approved in April.  Our amendment adds all current and future 333-approved aircraft to our exemption.  AirGon can now fly any sUAS that has ever been approved by the FAA, as well as all future approved systems.  This list currently contains 1,150 different sUAS (AirGon's own AV-900 is number 207 on the list).  This provides us a lot of flexibility in working with clients; for example, in situations where a glider sUAS is more efficient than a rotor craft.

The FAA has also recently streamlined the process of obtaining an N number for a sUAS.  Prior to the change, a paper process that required several months was the only option.  Now an online system is available, greatly simplifying this procedure.  Note that this is not the new online registration system for hobby drones but rather the system used for obtaining an N number for a manned aircraft (if you are confused, join the club!).  Combined with our new 333 amendment, we can now get a new aircraft legally operating within days.

We continue to do a lot of work to optimize the accuracy of point clouds derived from dense image matching (DIM).  DIM point clouds are the data of choice for sUAS mapping since they can be generated from low-cost prosumer cameras using standard application software such as Pix4D Mapper or PhotoScan.  The question always remains as to how good these data really are.

It has taken us a lot of experimentation and analysis but we think we have fleshed out a procedure for assuring good absolute vertical accuracy.  It involves the use of Real Time Kinematic (RTK) Global Navigation Satellite System (GNSS) positioning on the sUAS, a local base station that we tie into the national Continuously Operating Reference Station (CORS) network and the National Geodetic Survey’s Online Positioning User Service (OPUS) to “anchor” the project to the network.  We have also discovered that high vertical accuracy cannot be obtained without camera calibration.  We typically use an in situ process for calibration.  We have flown many dozens of sites (primarily mining), giving us a rich set of test data.

I cannot overemphasize how critical network vertical accuracy is.  Most customers want elevation maps of their sites.  These are usually delivered as contour vector files.  As we all know, a 1-foot contour requires vertical accuracy of 1/3 of a foot.  This is a very tight requirement!  A three-inch vertical bias error over an acre is an error of about 400 cubic yards; this is significant.
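
For those who want to check that arithmetic, here it is in a few lines of Python:

```python
# A uniform 3-inch vertical bias over one acre, expressed in cubic yards.
acre_sq_yd = 4840            # square yards in an acre
bias_yd = 3.0 / 36.0         # 3 inches in yards
print(f"Volume error: {acre_sq_yd * bias_yd:.0f} cubic yards")   # about 403
```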

We see a lot of drone companies processing site data with no control and no RTK/PPK.  While, with the introduction of scale into the model (many companies do not even do this), one might obtain reasonable difference computations (such as volumes), the network accuracy is very poor (obtained from the airborne navigation grade GNSS only) and hence the data are of limited use.  We have discovered that these techniques (where no control and/or RTK/PPK is used) can result in the vertical scale being incorrectly computed.  This means that even differential measurements are not accurate.  Why spend all of the money to collect these data if they are of unknown accuracy?

A more difficult area that we have studied over the past several years is what I refer to as “conformance”: how well does the DIM actually fit the object being imaged?  DIM processing software (again, such as Pix4D and PhotoScan) does a miraculous job of correlating a 3D surface model from highly redundant imagery using the general class of algorithms called Structure from Motion (SfM).  In addition to the obvious areas where SfM fails (deep shadow, thin linear objects such as poles and wires), a lot of subtle errors occur due to the filtering that is performed by the SfM post-extraction algorithms.  These filtering algorithms are designed to remove noise from the surface model.  Unfortunately, any filtering will also remove true signal, distorting the surface model.

We are working with several of our mining customers to quantify these errors and, once these errors are characterized, to develop best practices to minimize them or at least recognize when and where they occur.  An example of an analysis is shown in Figure 1.  Here we are analyzing a small pile (roughly outlined in orange) of very coarse aggregate with a volume of about 340 cubic yards.  This site was flown with a very high-end manned aircraft LIDAR system and with AirGon's AV-900 equipped with our RTK system.  The DIM was created using Agisoft PhotoScan.  We obtained excellent accuracy as determined by a number of signalized (meaning ground targets visible in the imagery) control and supplemental topo-only shots.  We used in situ calibration to calibrate the camera (a Sony NEX-5 with a 16 mm pancake lens).

As can be seen in Figure 1, we created a series of cross sections over the test pile.  These cross sections were generated using the Cross Section Point Cloud Task (PCT) in LP360/Topolyst.  This tool drapes cross sections at a user specified interval, conflating the elevation value from the user specified point cloud.  We ran the task twice, conflating Z first from the LIDAR point cloud and then from the DIM.   In Figure 1 we have drawn a profile over one of the cross sections with the result visible in the profile view.  The red cross section is derived from the LIDAR and the green from the DIM.

Figure 1: Comparing LIDAR (red) to DIM (green)
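
The mechanics of the comparison are easy to sketch outside LP360/Topolyst: drape both point clouds onto the same cross-section stations and difference the elevations.  The data below are random stand-ins; only the method is meaningful.

```python
import numpy as np
from scipy.interpolate import griddata

def drape(cloud_xyz, stations_xy):
    """Interpolate a point cloud's elevations at 2D station locations."""
    return griddata(cloud_xyz[:, :2], cloud_xyz[:, 2], stations_xy, method="linear")

# Stand-in clouds over a 50 m square and a section line at y = 25 m
rng = np.random.default_rng(0)
lidar = rng.uniform(0.0, 50.0, (5000, 3))    # placeholder for the LIDAR cloud
dim = rng.uniform(0.0, 50.0, (5000, 3))      # placeholder for the DIM cloud
stations = np.column_stack([np.linspace(1, 49, 97), np.full(97, 25.0)])

dz = drape(lidar, stations) - drape(dim, stations)
print(f"Mean |dz| along the section: {np.nanmean(np.abs(dz)):.2f} m")
```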

Note that the DIM cross section (green) is considerably smoother than the LIDAR cross section (red).  This is caused by several factors:

  • The aggregate of this particular pile is very coarse with some rocks over 2 feet in diameter. This leaves a very undulating surface.  The LIDAR is fairly faithfully following this surface whereas the DIM is averaging over the surface.
  • The AV-900 flight was rather high and the data were collected with a 16 mm lens. This gave a ground sample distance (GSD) a little larger than is typical for this type of project.
  • Due to the coarseness of the aggregate, significant pits appear between the rocks, creating deep shadows. SfM algorithms tend to blur in these regions, rendering the elevation less accurate than in areas of low shadow and good texture.

The impact of lower conformance is a function of both the material and the size of the stockpile (if stockpiles are what you are measuring).  For small piles with very coarse material (as is the case in this example) a volumetric difference between LIDAR and SfM can be as great as 20%.  On larger piles with finer aggregates, the conformance is significantly better.   For example, in this same test project, we observed less than 0.25% difference between LIDAR and the DIM on a pile of #5 gravel containing about 30,000 cubic yards.

There still remains the question of which is more accurate: the volume as computed from the LIDAR or the volume as computed from the DIM?  I think that if the LIDAR data are collected with a post spacing of half the diameter of the average rock, the LIDAR will be more accurate (assuming that it is well calibrated and flown at very low altitude).   However, the DIM is certainly sufficiently accurate for the vast majority of aggregate volumetric work, so long as very strict adherence to collection and processing best practices is maintained.  For most high accuracy volumetric projects, manned LIDAR flights are prohibitively expensive.

We continue to do many experiments with local and network accuracy as well as methods to improve and quantify conformance.  I’ll report our results here and in other articles as we continue to build our knowledge base.

December 2015

Well, like everyone is saying, I cannot believe that 2015 is drawing to a close.  Where did the year go?  This will be a short note – I have to get some Christmas shopping done!

We finally released LP360 this past week.  The early postponement was to squeeze new features into the products, whereas the later delays were to ensure stability.  We have been using the products in our internal production processes for the past few months.  This has been a great experience in terms of fine-tuning features and monitoring stability.

One of the things we have been focused on is production processes.  Of course, repeatable process is what the GeoCue workflow products are all about so this is not a new thing for us.  We have always appreciated that quality is most directly related to rigorously controlled processes, not to the heroics of individual production folks.  Now that we are doing a lot of field work, we are examining ways to improve this aspect of the process.  For example, the field work associated with acquiring mine site data with an sUAS is tricky.  It is not that the individual steps are particularly complicated, it is that there are a lot of steps that must be successfully accomplished in a specific order.  We are currently using a lot of checklists.  This is the minimum required to be successful.  How do we improve this process in harsh environments that often lack connectivity to the outside world?  No clear solutions yet but we are working on it!

We have also been working on simplifying our business structure.  We acquired QCoherent Software LLC (a Colorado-based company) in 2009.  Over time, we have moved all of the company to our headquarters in Huntsville, Alabama.  We are finally absorbing the corporate structure into GeoCue Group.  You will not notice any changes other than communications related to LP360 and LIDAR Server now being from GeoCue Group Inc.

We will soon be releasing Service Pack 4 for the GeoCue product set.  The next major release will be in 2016.  We are working on some simplifications to the product as well as better schemes for archiving products.  I think you will appreciate these changes.

Terrasolid is now offering a true 64-bit version for MicroStation CONNECT (the version of MicroStation that succeeds V8i).  Maintenance customers will have access to this new version of the Terrasolid tools.  We do caution, however, that this is still in the beta stage and is probably not sufficiently stable or feature-complete to introduce into production.   We estimate that this new version will be production-ready by the end of Q1 2016.

During this past year we have learned an incredible amount about how to design sUAS for mapping as well as the tools and processes needed to create products.  This overall workflow is fundamentally changing small area mapping but it is not easy to achieve accurate and repeatable results.  We now have come to realize that mine site mapping requires control 100% of the time and establishing this in dense image matching workflows is not at all straightforward.  I think this is good news for the professionals out there providing these services.  In the area of metric mapping, you will not be easily displaced by someone buying an inexpensive drone and a point cloud generation software application!

All of us here at GeoCue Group wish you a very relaxing holiday season and the very best of success in 2016!

November 2015

First of all, I have to apologize for us not releasing LP360 at the end of October as I promised in the last issue! We needed to add a feature to LP360 to assist with very dense data sets. This new “Classify by Statistics” point cloud task can be used for a variety of functions, among them data thinning. We also took the time to tune a number of different performance bottlenecks, including clipping contours to a project boundary.

We have entered the services business in a small way. We have encountered a number of mine operators who want to collect maps and volumes of their sites using small unmanned aerial systems (sUAS) but they are not yet ready to internalize the process. To assist with this transition, we now offer flying and data processing services to these customers. We do the bulk of this production in LP360 with a bit in our GeoCue workflow products.

One thing this foray into production is teaching us is the value of removing “clicks” from the production process! We are usually so focused on adding advanced features to our products that we overlook the simple things that can greatly improve the speed of a workflow. For example, we changed the destination class selector in the profile classification tool in LP360 to remember the destination class. This is a simple change that took a developer about 1 hour to incorporate. It now saves a data production technician many clicks in the classification process. You can be assured that we will have a renewed focus on basic productivity going forward!

We are in the midst of product planning for 2016. I think we have some pretty exciting developments in the pipeline. I will highlight a few here; you will be hearing details of these as 2016 rolls out.

On the GeoCue workflow software front, we intend to focus some energy on simplifying the product. When we first brought GeoCue to the market in 2004 (we started building the product in 2003), the average production shop employed technicians who were accustomed to “tool box” software with very complex features that a user could stitch together into a workflow that suited their particular needs (sounds a lot like the ArcGIS desktop products, right?). Now we find many organizations that are constructing workflows that they would like to be more “black box” and that just work out of the box. It is not unusual to see this occur as a technology such as LIDAR matures.

Our cloud-hosted products, LIDAR Server and Reckon, will see major capability additions in 2016. LIDAR Server will continue to be the premiere solution for managing and delivering LIDAR data in point cloud format, whereas Reckon is the life cycle management and storage environment for mine site sUAS mapping. Already we have added the ability for Reckon to serve as a WMS server for clients such as ArcMap and Autodesk. This allows a mine site engineer to bring up-to-date site imagery and vectors into these environments without the need to take physical delivery of these voluminous data.

Reckon is our first “subscription-only” product. Hosted in Amazon Web Services, this data portal is evolving into a system not only for reviewing mine site mapping by site and mission date but also for annotating stockpiles and planning the next mission. The huge advantage of this over more traditional means, such as trying to use Google Earth imagery, is that the mine engineer can use the most recent view of the mine (for example, last month's flight). This is nearly a requirement since the topography of these sites is so dynamic.

We are also modernizing the display architecture of LP360 to take advantage of advanced features in workstation and laptop video hardware. Advanced capabilities such as hardware rendering that were once found only in high-end video cards are now commonplace, even in lower-end laptops. The 2016 software base will “discover” graphics features and use whatever hardware capabilities it finds in the discovery process. We will, of course, support fallback to software algorithms for those machines lacking advanced features.

2015 will be the final release year for GeoCue Group software products to support Windows XP as well as 32-bit operating systems (other than our 32-bit LP360 extension for ArcMap). The “experimental” and final releases of 2016 will support Windows 7 and beyond, in 64-bit only. It has been quite some time since Microsoft ended support for Windows XP. We no longer receive development support for XP in Microsoft Visual Studio (our development environment) and thus must retire support for this venerable operating system.   Many other vendors, such as ESRI, have already ended support for XP, so those still on this system should plan accordingly.

For those of you in the United States, we wish you a very enjoyable Thanksgiving holiday!

Best Regards,

Lewis

Development, Windows 10, and EXP LP360 2015.1

We are entering the hottest and most humid part of the year here in Alabama so, like January, this is a good time to stay indoors and do things like system design!

We do a lot of system engineering and development work here at GeoCue. This ranges anywhere from customizations of our GeoCue workflow tools to new (“green field”) developments. I have noticed that more and more frequently we consider cloud-hosted environments such as Microsoft Azure and Amazon Web Services (AWS) as our solution platform. Besides fractional scaling (add more when you need it, remove it when no longer needed and pay for only what you use), we really like the data storage options. Prohibitively expensive just a few years ago, archiving all of your production data in the cloud is now worth considering. For example, AWS offers its Glacier archival storage for less than $125 per terabyte per year. There is just no way to do this on-premises with the level of assurance against data loss that you get with AWS. At any rate, cloud-deployed solutions make more and more sense as this paradigm matures. It is rather ironic, since when my great-granddaddy started in this business he was renting access to a remote time-share system. The more things change, the more they stay the same!

I just upgraded my workstation-class laptop to Windows 10. Since I was moving from Windows 8.1, this has been a positive experience. I have not yet had a lot of experience with the various capabilities, so the jury remains out on which is better for workstation use (no, Microsoft, we don't do image and LIDAR processing on tablet PCs!!): Windows 7 or Windows 10? There is a detailed story of my installation experience in this newsletter.

If you are an LP360 customer on maintenance (thank you very much!), you may have already installed our 2015.1 EXP release (yes, it finally went out the door!). The purpose of the Experimental release is to provide you with an early look at some of the features we are adding to the official release. Examples in the current EXP release include three new major tools:

  • Live View – A completely redesigned, real time interface for filtering the display
  • A Ground Cleaner Point Cloud Task (PCT) – this new PCT (available at the Standard level) is a tool that allows you to very rapidly clean up areas of ground classification that are incomplete (a common problem in delivered LIDAR data)
  • An Automatic Stockpile Toe Extractor PCT – This is a tool still in beta form. It allows you to automatically create polygons at the base of a stockpile by simply selecting a point on the pile. This tool really speeds up volumetric analysis

If you are not currently on maintenance, contact Ashlee Hornbuckle at ahornbuckle@lp360.com and she can assist you with returning to the program.

We will soon be moving our licensing to a cloud-hosted solution. This will make self-service of common licensing operations possible. We’ll first move LP360 and then look at our other products. This will take a bit since we have to do this development in-house.

OK, enough said! I have to get back to work! Have a great remainder of the summer!

Best Regards,

Lewis

LP360 Testing, Metric Mapping Kits and a New LIDAR Server

I spent the week of the Fourth of July at my beautiful retreat on the lovely Tennessee River, ostensibly on holiday. In reality, I was sitting with my laptop at the kitchen table writing magazine articles. I am not complaining though – the view is fabulous!

I was also very busy delaying our experimental release of LP360. I did a complete run-through of our new Live View display filter, sending multiple suggestions for tweaks to the development crew and testing out their new builds. On some days we cycled three builds! This is a real advantage of working from Casa Rio. If I were at the office, they would probably knock me in the head! The effort will be worthwhile – this has turned out to be a very nice tool for quickly modifying the display of point cloud data.

I am pleased to report that sales of our Metric Mapping Kit are beginning to take off (pun intended). The AV-900 MMK is a bundle of all of the hardware and software needed to do local area metric mapping and volumetric analysis. We have now collected test data over a wide variety of sites with much effort expended on analyzing metric accuracy as a function of variable parameters such as control, RTK, stockpile toe definition and so forth. The results are truly stunning. sUAS technology will be a paradigm shift for this type of analysis.

I am also pleased to announce (there will be a press release in the next week or so) an option for filing FAA 333 exemptions for the MMK. If you purchase an AV-900 MMK, we (well, actually our attorney) will file your complete FAA 333 petition for a flat rate of US $1,295.

Speaking of stockpiles, the EXP release of LP360 (I promise we will release this by 15 July!) has a new point cloud task (PCT) for automatically digitizing the “toe” of a “clean” stockpile. Simply click a point on a pile and – voilà – a 3D stockpile toe! This tool is showing great potential and will be refined as we work on the final 2015.1 release. Our goal is to make stockpile collection as simple and repeatable as is possible. This function will be available in the Windows (“standalone”) release of the Standard/sUAS level of LP360 at EXP but will be in the ArcGIS extension by the time we deliver the final 2015.1 release.

Have a look at the latest iteration of LIDAR Server. You can view demonstration data sets by visiting www.lidarserver.com. We have replaced the legacy Silverlight client interface with an all new JavaScript browser. LIDAR Server is a great technology for hosting county-wide LIDAR data deliveries, making them available to constituents for viewing and ad hoc deliveries. For example, if a county engineer needs LIDAR data in the vicinity of a road intersection, she can just digitize an area of interest in the LIDAR Server client and download the dynamically created LIDAR data set to her workstation. We are doing a lot of work on LIDAR Server so there will be more to come!

Speaking of servers, our Reckon volumetric results management system has reached what might be called “version 1.0,” ready for use. Reckon is a hosted service running in Amazon Web Services (AWS). It is aimed at both owner-operators of surface mines and service providers. Right now you can experiment with Reckon by contacting us for an account. By the end of this month, we will have a new Reckon web site up with a live, on-line demo; stay tuned!

Keep enjoying your summer – see you in August!

Lewis Graham, CTO GeoCue Group