LiDAR Systems: A Guide to Understanding LiDAR Sensors Function and Capability

Let’s talk about the various metrics used to describe Geo-MMS LiDAR sensor characteristics – and how to sort through the overwhelming information provided by LiDAR sensor manufacturers.

LiDAR Systems: Sensor Range and Precision

The range of a LiDAR sensor is often one of the key characteristics highlighted in any information you receive regarding that sensor. The sensor range is tested under laboratory conditions and is presented as the maximal range at which detection can be captured. For effective mobile mapping, however, the effective range is significantly less than the nominal range stated in a manufacturer’s datasheet. To simplify, we group mobile mapping range into three categories: tactical, mid-range and long-range.

Accuracy and precision are two different statistical concepts.

  • Accuracy measures how close a range measurement (mean) is to the true distance of an object.
  • Precision measures how closely repeated measurements of the same distance agree with each other.

For 3D mapping cases, precision is critical for point cloud ‘crispness’ in the form of clean corners and defined features. Lower precision can result in fuzziness in the generated point clouds, making it difficult to discern important features.
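The distinction is easy to make concrete: the bias of the mean measures accuracy, while the spread of repeated measurements measures precision. A minimal sketch with hypothetical range measurements:

```python
import statistics

def accuracy_and_precision(measurements, true_distance):
    """Accuracy: closeness of the mean to the true distance (bias).
    Precision: spread of repeated measurements (1-sigma standard deviation)."""
    mean = statistics.mean(measurements)
    bias = mean - true_distance              # accuracy error
    spread = statistics.stdev(measurements)  # precision
    return bias, spread

# Hypothetical repeated range measurements to a target 50.00 m away
ranges = [50.03, 50.05, 50.04, 50.02, 50.06]
bias, spread = accuracy_and_precision(ranges, 50.00)
print(f"bias = {bias*100:.1f} cm, precision (1-sigma) = {spread*100:.1f} cm")
```

A sensor can be precise but inaccurate (tight cluster, wrong mean) – it is the spread term that governs point cloud ‘crispness’.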

LiDAR Systems: Sensor Ruggedness

The Ingress Protection Code (IEC 60529) provides guidance on a sensor’s resistance to ingress from particles such as dust and water.

  • The first digit represents resistance to solids such as dust (0-6) – 6 is the most resistant
  • The second digit represents resistance to water (0-9) – 9 is the most resistant
  • Spinning sensors generally have the highest IP ratings (IP67 to IP69K)
  • Raster scanning systems will typically have a slightly lower rating (IP65 to IP67)

Figure 1. Ingress Protection Rating Guide

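The two digits can be read mechanically; a small helper makes the convention concrete (the function name and dictionary layout are ours):

```python
def parse_ip_rating(code):
    """Split an Ingress Protection code (IEC 60529) into its two digits.
    First digit: solid-particle (dust) protection, 0-6.
    Second digit: liquid (water) protection, 0-9; a K suffix (e.g. IP69K)
    denotes resistance to high-pressure, high-temperature jets."""
    body = code.upper().removeprefix("IP")
    dust, water = int(body[0]), int(body[1])
    return {"dust": dust, "water": water, "high_pressure": body.endswith("K")}

print(parse_ip_rating("IP67"))   # typical spinning sensor
print(parse_ip_rating("IP69K"))  # power-wash resistant
```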

LiDAR Systems: Sensor Architecture & Laser Type

In this series of blogs, we have broadly separated UAV-LiDAR sensors into two groups – spinning LiDAR and raster scan LiDAR systems. The system architectures of these two categories are fundamentally different. Raster scan sensors are the only kind that can be considered truly survey-grade; they were designed and developed for this function. These sensors support more returns per pulse, giving them a distinct advantage for LiDAR mapping over foliage.

Spinning LiDAR sensors were primarily developed for autonomous vehicle applications and robotics. Their usefulness also extends to affordable 3D mobile mapping. Installation on an autonomous vehicle illustrates the need for the highest level of Ingress Protection – to protect from power washing and exposure to harsh elements. The advantage of being lightweight and low-cost is attractive to many pursuing UAS-LiDAR projects. However, those requiring survey-grade accuracy and precision should always opt for a raster scan LiDAR sensor.

Spinning sensors typically utilize Edge Emitting Laser Diodes (EELDs), while raster scan systems utilize a fiber-pulsed laser. EELDs are a mature technology and offer reliable all-around performance. These sensors typically operate at the ~905 nm wavelength. Wavelengths in the 600-1,000 nm range are not inherently eye-safe, as the laser light can be focused by the lens of the eye and absorbed by the retina. As a result, sensors operating in this range have their power limited or capped to comply with eye-safety regulations.

Fiber lasers are generally more expensive and more power-hungry, and operate at the ~1,550 nm wavelength. Light at 1,550 nm is absorbed before it reaches the retina, so sensors at this wavelength can operate at high or full power without violating eye-safety regulations. This directly increases the maximum achievable range, which is why these sensors can be utilized at higher AGL altitudes – and why manufacturers who choose 1,550 nm do so: to pump far more power into the laser.

Choosing the correct LiDAR sensor for your application and budget is an essential stage in setting up your UAV or mobile mapping system. For any questions relating to LiDAR sensors or your project in general, Request more Information. Already know the ideal sensor for your use-cases? Request a Quote today!

Figure 2. Geo-MMS Point&Pixel system integrated with the Teledyne Optech CL-360 system.


In case you missed Part 1 of this LiDAR Systems and Sensors series, it can be found here.

Originally published by Geodetics


Wildfires consume millions of acres of land every year, destroying homes, communities, and infrastructure across the United States and around the globe. Fire seasons are getting longer, and fires are increasing in intensity as droughts increase and the effects of climate change become more pronounced. The economic and environmental impacts are devastating and costly, necessitating a closer study of wildfire causes and risk mitigation to better assess and minimize risk and damage.

With recent advancements in remote sensing technology, LiDAR data collection, and fire science, more and more solutions are becoming widely available for fire tracking and containment. In fact, AEVEX Aerospace collaborates with several fire departments – including the Orange County Fire Authority (OCFA) – on fire programs that enhance intelligence, surveillance, and reconnaissance technology for wildfires throughout the United States. The program lead for OCFA currently operates a FLIR 380-HD and an Overwatch TK-9 sensor onboard a King Air 200 aircraft. This system transmits video, maps, and images of burning fires in real time to firefighter command and control centers. Real-time analysis puts accurate information in the hands of crews and fire management personnel. For more information about this technology, please click here.

Considering the cost of operating such systems from manned aircraft, Geodetics (an AEVEX Aerospace company) is working on a UAV prototype equipped with several tightly coupled onboard sensors, allowing for data acquisition and modeling techniques to measure relevant variables for wildfire management and risk assessment. This UAV prototype is ideal for targeted wildfire management in reasonably spotted areas, and offers much lower overhead costs when compared to manned aircraft missions. An assessment of susceptible areas could produce models that characterize wildfire likelihood, intensity, and impacts. Measurable variables including topography, vegetation health (moisture content), land cover types, and relative temperature can aid in predicting the severity and intensity of wildfires. The specialized Geo-MMS (Mobile Mapping System) payload used in this study includes a long-range LiDAR sensor, RGB, multispectral, and thermal imaging sensors. The figure below outlines the system prototype with integrated sensor performance:


Figure 1. Overview of the sensors employed in Geodetics’ UAV-based wildfire management system.

Geodetics’ approach to wildfire risk assessment involves a hybrid drone that can remain airborne for several hours during data collection. The proposed payload includes the long-range Optech CL-360XR LiDAR sensor, a multispectral imaging sensor, a Sony α7R II RGB camera, and a FLIR longwave infrared thermal sensor. Note that these sensors can be customized to meet different project demands and requirements.

These onboard sensors together provide accurate and reliable data for a wildfire risk prediction model. LiDAR data provides high-resolution, fine-scale measurements that can extract biophysical features of vegetation as well as generate DTMs/DEMs. The sensor measures structural data, giving accurate and reliable information about topography, slope, and vegetation characteristics: canopy height, crown length, tree basal area, density, diameter, and dead-tree density.
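One common derived product ties several of these measurements together: a canopy height model, the difference between the top surface (DSM) and the bare earth (DTM). A deliberately simplified sketch – real pipelines classify ground returns rather than taking per-cell minima, and the points below are hypothetical:

```python
from collections import defaultdict

def canopy_height_model(points, cell=1.0):
    """Grid (x, y, z) points into square cells; take the highest return as the
    surface (DSM) and the lowest as bare earth (DTM) per cell.
    Canopy height = DSM - DTM."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    return {key: max(zs) - min(zs) for key, zs in cells.items()}

# Hypothetical returns: ground at ~2 m elevation, one tree crown at ~14 m
pts = [(0.2, 0.3, 2.0), (0.7, 0.6, 14.1), (1.4, 0.2, 2.1), (1.8, 0.9, 2.0)]
print(canopy_height_model(pts))
```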

Originally published by Geodetics

How one AEVEX employee’s work on real-time mapping would influence Google Earth

When Don Burns was writing visualization software for NASA and supporting flight simulator development for McDonnell Douglas, Boeing, and the Army, he never thought that what started out as real-time 3D graphics in the ’90s would one day turn into the technology that helps to power Google Earth. Thirty years later, Google Earth is a household term, an application that is used for everything from education to remote sensing research, resource management, and even predicting disease outbreaks.

National Inventors’ Day was February 11. It’s a day to recognize the unique accomplishments that continue to further technology, science, and human knowledge. It’s also the perfect occasion to recognize contributions – small and large – that influence individual careers and the world around us.

Don is currently part of the AEVEX Aerospace team as a Senior Software Architect, writing code for AEVEX’s Sierra software. Sierra is a 3D globe that integrates maps, aerial imagery, and terrain to help provide real-time data from air to ground for mission-critical and disaster mitigation situations.

“Many years ago, when I worked on the technology that would go on to become Google Earth, I never thought it would have such a significant impact on how I approached my work in the future,” said Don. “The relationship between real-time visualization, aerospace and flight simulation, and 3D mapping has evolved significantly over the years, and I’m proud to be a part of that.”

From coding for NASA, to graphical interactive terrain databases, and even to developing virtual reality attractions for Disney, Don’s career has followed a trajectory of innovation and out-of-the-box thinking.

“During my time at Silicon Graphics, we were working on programming that modeled a 3D earth, including the capability to zoom in and see details. This was heavily based on the fast-moving, high-definition, real-time graphics we used in flight simulation and other forms of visualization, and our terrain databases evolved into whole-earth databases,” Don said. “That work followed us into a start-up named Keyhole and a product named EarthViewer, which got the attention of Google. Google purchased Keyhole and EarthViewer became Google Earth. Google Earth enthusiasts may recognize the data format KML – in which the ‘K’ still retains the reference to Keyhole.”

Just as Google Earth needed high-quality, interactive 3D graphics, first responders and military personnel also require high-performance graphics in real or near real-time frames. More than traditional mapping, digital interactive cartography helps track troops, mitigate natural disasters like fires and oil spills, and gain a detailed understanding of relevant topography.

“Working on the Sierra software for AEVEX has been very rewarding. By integrating interactive databases with live cameras, we are able to offer augmented reality that adds much-needed detail to surveillance, firefighting, tracking, and disaster mitigation,” said Don. “This technology is all airborne, so this robust technology also has to be optimized for size, weight, and power.

“I enjoy using my diverse coding and simulation background to continue the evolution of interactive mapping,” he added. “It’s exciting to see what’s next in the world of computer vision.”

And what is next?

“It’s all about artificial intelligence, specifically computer vision,” Don said. “This enables computers and computer systems to gather meaningful information from visual input like images, videos, and maps. This is very compute-intensive, so as the need for this type of data grows, so does the need to evolve technology, graphics, and databases. But the more effort we put into inventing the future of mapping and simulation, the safer we can make the world. I imagine a future where fires are mapped, tracked, and predicted from the start. Where oil spills are quickly contained with predictive data. Where troop movements are accurately tracked and assessed.”

Inventions – both big and small – and contributions – both long- and short-term – will guide the future of these technological developments. Here’s to many more innovative ideas!

LiDAR Sensors for UAV / Drone Application

When selecting the right LiDAR, you must consider the various parameters that define the performance of the LiDAR sensor. These parameters are summarized in the table below.

LiDAR Specifications Listed by Range


If you are an expert in LiDAR scanning, you can probably stop here. Otherwise, this article provides assistance in selecting the right LiDAR sensor for your application. There are several approaches to narrowing down the right LiDAR sensor. The first, and probably the easiest, is basing your decision on price. Another approach considers factors such as the operational environment, the weight of the sensor (which impacts flight duration), etc. There is nothing wrong with these approaches; however, here we tackle the question from a different perspective. In our approach, we consider flight altitude, required resolution and the application. The reason we drop accuracy from consideration is that in most cases, regardless of the LiDAR sensor, the desire is for the highest accuracy.

Flight Altitude

In LiDAR mapping, the flight altitude is a key parameter in picking the appropriate sensor. If you can fly below 60m AGL, the tactical-range sensors are appropriate. For altitudes higher than 60m, you must consider either mid-range or long-range LiDAR sensors, as shown below.
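This altitude rule of thumb can be captured in a few lines (the 100 m mid/long split below is an assumed illustrative threshold, not a figure from the text):

```python
def sensor_category(altitude_agl_m):
    """Rule of thumb: below ~60 m AGL a tactical-range sensor suffices;
    above that, a mid-range or long-range sensor is needed.
    The 100 m mid/long boundary is an assumed illustrative value."""
    if altitude_agl_m <= 60:
        return "tactical-range"
    if altitude_agl_m <= 100:
        return "mid-range"
    return "long-range"

for agl in (40, 80, 150):
    print(agl, "m AGL ->", sensor_category(agl))
```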




Resolution

The projection of a single laser beam onto the ground yields its resolution. The resolution is a function of the flight altitude, the scan rate (frequency) and the angular resolution. A minimum resolution for tactical/mid-range LiDAR is about 5-10cm@30m, while for long-range LiDARs it is about 2-3cm@100m. The frequency and angular resolution of the tactical-range and mid-range LiDAR sensors are in the same range. Thus, if you need a similar or higher resolution at higher altitudes, you must use a long-range LiDAR sensor.
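A small-angle approximation ties these quantities together: the spacing between consecutive laser footprints on flat ground is roughly altitude times the angular step. The 0.1° step below is an assumed, typical value, not a specific sensor’s specification:

```python
import math

def point_spacing(altitude_m, angular_resolution_deg):
    """Approximate along-scan spacing of consecutive laser footprints on flat
    ground directly below the sensor: altitude x angular step (small-angle)."""
    return altitude_m * math.radians(angular_resolution_deg)

# A 0.1 degree angular step (assumed) lands in the 5-10cm@30m band quoted
# above, and shows why the same sensor coarsens at higher altitude:
print(f"{point_spacing(30, 0.1) * 100:.1f} cm @ 30 m")
print(f"{point_spacing(100, 0.1) * 100:.1f} cm @ 100 m")
```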




Application

The last parameter to be considered is the application. For applications such as agriculture, canopy classification, forestry and forest planning, topography (DEM/DTM) and archaeology, where canopy penetration and ground returns are required, the number of channels and returns provided by the sensor will help narrow your selection. Velodyne LiDAR sensors currently offered with Geo-MMS provide up to two returns, the Quanergy M8 LiDAR provides up to three returns, and the Teledyne Optech LiDAR sensors provide up to four returns. For applications such as powerline and transmission inspection, railway infrastructure inspection, BIM/architecture, quarry/open-pit mining, etc., the number of returns is less important.
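For canopy-penetration applications, a common first step is to keep only the last return of each pulse, since those are the returns most likely to have reached the ground. A minimal sketch using the per-point return fields a LAS file stores (the points below are hypothetical):

```python
def last_returns(points):
    """Keep only the last return of each pulse. Each point is
    (x, y, z, return_number, number_of_returns) - the same per-point
    fields recorded in a LAS file."""
    return [p for p in points if p[3] == p[4]]

# Hypothetical pulses: one two-return pulse through canopy, one single return
pts = [
    (1.0, 1.0, 12.5, 1, 2),  # first return: canopy top
    (1.0, 1.0, 2.1, 2, 2),   # last return: ground under canopy
    (2.0, 1.0, 2.0, 1, 1),   # single return: open ground
]
print(last_returns(pts))
```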

The Geo-MMS product suite is available with a wide range of LiDAR sensors. We classify LiDAR sensors into three categories: tactical, mid and long-range as illustrated below:

Supported LiDAR sensors in the tactical-range, mid-range, and long-range scanning


Originally published by Geodetics

Road Surveying using Geodetics’ Geo-MMS LiDAR Suite of Products

The purpose of this blog – the second in a two-part series – is to continue to explain the capabilities of Geo-MMS LiDAR payloads for road surveys and highway scanning. We will explore the wider workflows surrounding the digital mapping of transport infrastructure and associated assets. In case you missed Part 1 of Road Surface Mapping with LiDAR, it can be found here.

Figure 1. Geo-MMS ground-vehicle mounting assembly with 16-channel tactical-range LiDAR sensor.


Expected increases in US infrastructure spending over the next few years will undoubtedly incorporate heavy capital investment in new transport projects. However, what is arguably more important is the maintenance of existing transport infrastructure. Accumulated road damage carries a high cost to address. Oftentimes, as soon as damage in one location is addressed, a new problem area is identified. Continuous ‘damage repair’ cycles can be avoided by identifying areas of weakness in the network and addressing them before the damage grows to the extent that manned crew deployment is needed, causing disruption to local traffic and increasing costs further. With a lack of high-quality data, inconsistencies in damage reporting, and an overall lack of adequate prioritization, it can be difficult to provide an integrated solution.

A strategic documentation plan for all road networks (preventative maintenance) is more economically attractive than the passive approach often taken by many transport planning departments. 3D scans of the full transport environment feed into other important infrastructure inspections, including road surveys that include assessment of pavement conditions, bridge inspection, roadside powerline assessment, etc. Transport planners need to work smarter rather than harder by leveraging the available technology, achieving more results with less manpower.

LiDAR scan data can be enhanced by spherical imaging of the surrounding landscape in a 360° frame – the same frame a LiDAR sensor operates within. Ladybug, an RGB camera designed for mobile mapping, captures spherical 360° panoramic views of the environment by merging images from six internal cameras. Using the Geo-MMS Navigator, images from all six Ladybug cameras are geo-tagged, allowing geolocated features to be registered in the images. The combination of LiDAR and spherical imaging provides a powerful tool for powerline and transmission line monitoring, bridge inspection, road surveys, etc. To facilitate this multi-sensor integration, Geodetics provides a mounting system specifically for this setup, as shown in Figure 2. This bracket has two degrees of freedom such that it can be mounted on any vehicle regardless of orientation and height.

Figure 2. This bracket has two degrees of freedom such that it can be mounted on any vehicle regardless of orientation and height.


This provides a second dataset, used for detailed visual assessment of features of interest identified through LiDAR, including road surface markings and pavement cracks. Full panorama images and 3D scan data, coupled with simultaneously collected precise position, orientation, and timing data, combine to produce multifaceted, high-resolution maps. These maps enable walk-through simulations – an innovative solution that will become more common as the value of manual inspections gradually decreases and digital inventories are required for documenting all infrastructure (and associated assets). The collected data is most valuable when assimilated into existing infrastructure databases, which help define strategies for future developments, maintenance schedules, and transport planning, as exemplified in BIM.

Actionable Data

With all Geo-MMS LiDAR payloads comes the seamless capability to switch between ground-vehicle and UAV platforms. Additional opportunities arise when integrating ground-captured and UAV-captured point cloud data in the same viewing frame. Within the realm of road-surface mapping, these applications can extend to damage assessment after natural disasters, aging-facility inspection, progress inspection on road remodeling projects, etc.

Figure 3. Raw LAS point cloud file captured from a car-mounted Geo-MMS LiDAR payload.


While terrestrial laser scanning can produce superb accuracy, its obvious limitations are the set-up and scan time required for a relatively small section of roadway. Coupled with the additional labor hours needed to perform this work, accuracy may be high, but at the expense of time, safety and cost-efficiency. A combination of terrestrial, mobile, and UAV data may be the most comprehensive option, but the majority of the workload can be handled by a mobile scanning device. This can be installed on the rear of a car, SUV, ATV, pickup truck, or any other vehicle robust enough to support the temporary installation. The popularity of mobile scanning has grown to the point where software developers have focused on this specific niche and application – TopoDOT being one popular example. Such software can automatically create cm-level road surface models and identify and export curbs, railings, utility poles, encroaching vegetation, and other features of interest in the transport infrastructure, which can then be extracted for further analysis in GIS and CAD programs. Meshing, including TINs (Triangulated Irregular Networks), can be performed using widely employed automated algorithms that generate 3D statistics on the shape, volume, and frequency of structural anomalies.
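As a toy illustration of the kind of 3D statistics such algorithms produce, the volume of a road-surface depression can be estimated directly from gridded elevations. This is a crude stand-in for mesh-based volume tools; the grid values and cell size are hypothetical:

```python
def anomaly_volume(cell_elevations, reference_z, cell_area=0.01):
    """Estimate the volume of a depression (e.g. a pothole) from gridded
    elevations: sum the depth below the nominal road surface in each cell
    times the cell footprint (default 10 cm x 10 cm = 0.01 m^2)."""
    return sum(max(0.0, reference_z - z) * cell_area for z in cell_elevations)

# Hypothetical 10 cm grid over a pothole; nominal road surface at 0.00 m
grid_z = [0.00, -0.02, -0.05, -0.04, -0.01, 0.00]
print(f"volume ~ {anomaly_volume(grid_z, 0.0):.5f} m^3")
```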

What many are seeing across the industry are specialized scanning systems developed specifically for the goal of road/highway maintenance across transport and infrastructure industries. As pioneers in Defense-grade navigation technology and sensor integration, Geodetics is the trusted source for cost-effective and customized LiDAR mapping solutions in the air, on land, and at sea.

Originally published by Geodetics

Leveraging your Geo-MMS for Detailed Road Surface & Infrastructure Modeling

Following the economic downturn experienced in 2020, it is expected that the US will significantly increase investment in the country’s infrastructure (e.g. roads, bridges) over the next several years, a key catalyst for growth regeneration in the US and global economies. With this in mind, this blog (the first in a two-part series) will focus on use-cases for leveraging Geodetics’ Geo-MMS LiDAR payloads for road surface mapping applications, including the identification of small cracks, potholes and road-surface anomalies.

A strategic documentation plan for all road networks is more economically attractive than a passive, reactive approach. 3D scans of the full transport environment feed into other important infrastructure inspections, including assessment of pavement conditions, bridge monitoring/inspection, roadside powerline assessment, and more.

Geodetics’ customers have leveraged the power of their LiDAR mapping systems for use on ground-vehicles for many years. These include cars, all-terrain vehicles (ATVs) and rail-modified platforms (as shown below). Learn more about the applications of rail-modified platforms from a previously published blog. Geodetics has developed ground-vehicle mounting assemblies designed to accommodate all Geo-MMS LiDAR payloads. These mounting brackets are designed to be flexible, like the payloads they carry, and can be installed on virtually any moving vehicle.

Figure 1. Survey-grade raster scanning LiDAR sensor, installed on a modified Geo-MMS ground-vehicle mounting assembly (left – Access Surveyors) and Geo-MMS 32-channel tactical-range LiDAR with 360° spherical imaging camera integrated for off-road mapping (right – Precise Sensing).


Mobile LiDAR Scanning in Practice

For road surface mapping, the sensor should be elevated and oriented such that it scans the road surface and the surrounding terrain/features in a full 360° frame. While all LiDAR sensors integrated by Geodetics can be integrated on ground-based mobile platforms, recent demand has specifically targeted our high-end survey-grade raster scanning LiDAR sensors. The unrivaled precision, accuracy, and point density of these sensors make them ideally suited to capturing and documenting cracks and distresses on roads, which may not be captured by spinning analog LiDAR sensors. The heavier weight of these sensors is not a concern for ground-based mobile mapping, unlike its effect on flight time when mounted to a UAV. The cm-level resolution offered by these sensors is frequently sought by those aiming for the highest possible accuracy in road surface mapping.

One of the many advantages of mobile road scanning, as stressed previously, is the rapid and efficient identification of small cracks, potholes and road surface anomalies. The point density achieved with high-performance systems allows the user to scan at regular highway speeds, allowing vast datasets to be collected in a fraction of the time required by traditional methods, all while improving positional accuracy and increasing safety. The reflectivity characteristics of road markings, road signs, and railings along transport routes make them pronounced in the intensity channel of the collected raw LAS files.

Figure 2 below displays data captured with the CL-360 LiDAR from Teledyne Optech. The key to collecting such high-quality point clouds lies in optimizing a balance between sensor scan line frequency, PRF (Pulse Repetition Frequency), and vehicle speed. For the highest point density, we can select the maximum capabilities of 250 scan lines per second and a 500 kHz PRF. Vehicle speed will then ultimately determine LiDAR point density. Using the full FOV available, we can detect all features of interest on the road and surrounding terrain.

Figure 2. Raw LAS point cloud file captured from a car-mounted survey-grade payload.

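The balance between scan-line frequency, PRF and vehicle speed can be sanity-checked with two back-of-the-envelope figures. The 250 lines/s and 500 kHz values are the CL-360 maxima quoted above; the 30 m usable swath and highway speed are assumed values for illustration:

```python
def scan_metrics(prf_hz, scan_lines_per_s, speed_m_s, swath_width_m):
    """Back-of-the-envelope mobile-scan figures:
    - along-track spacing between scan lines = speed / scan-line rate
    - average point density = pulse rate / (speed x swath width)"""
    line_spacing = speed_m_s / scan_lines_per_s
    density = prf_hz / (speed_m_s * swath_width_m)
    return line_spacing, density

# 250 lines/s, 500 kHz PRF, ~100 km/h (27 m/s), assumed 30 m usable swath
spacing, density = scan_metrics(500_000, 250, 27.0, 30.0)
print(f"line spacing {spacing * 100:.0f} cm, ~{density:.0f} pts/m^2")
```

Doubling vehicle speed doubles the line spacing and halves the density, which is why vehicle speed ultimately determines point density once the sensor runs at its maxima.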

Actionable Data

Assessment records of roads and their associated assets are increasingly needed in formats native to GIS and CAD programs. LiDAR scanning introduces unmatched levels of dimensional accuracy, scan feature richness and efficiency of data capture. Capturing LiDAR scan data along road and rail transportation corridor routes can generate terabytes of point cloud data. An efficient process to manage the captured data, assess its quality and extract the desired information is necessary to seamlessly feed downstream planning, design, engineering and construction operations.

Captured scan data can be meshed in post-processing software to produce a 3D structure of any detected anomalies. The 3D nature of the scan data also allows point cloud sections to be inverted and viewed from the bottom, something not possible with imagery alone. A basic example is shown in Figure 3 below, with meshing performed with open-source point cloud processing software.

Figure 3. Example showing a basic mesh of point cloud data to highlight road-surface features.




Figure 4. Raw LAS intensity channel and feature signature.



In part two of this blog series, we will dive further into the role of mobile LiDAR scanning on road and infrastructure projects along with the role RGB imaging (as seen in Figure 1) plays in the overall mapping ecosystem and workflow.

Originally published by Geodetics

AEVEX Aerospace, a full-spectrum provider of innovative aircraft, remote sensing, and analysis solutions to government and commercial clients, announced today that it has received two awards under the General Services Administration (GSA) 10-year ASTRO contract. This contract has a $2B ceiling and is administered through GSA’s Federal Systems Integration and Management (FEDSIM) Center.

“As a company that performs end-to-end aircraft and sensor system design, provision, integration, operations, sustainment, and data analysis – packaged or as a service anywhere in the world – ASTRO provides our clients a superb vehicle to support their requirements,” said Brian Raduenz, AEVEX’s Chief Executive Officer. “Key support on this contract includes leveraging the AEVEX Test & Training Range in New Mexico, our AS9100 rapid prototyping services, and FAA/EASA/TCCA Part 145 certified repair station for design, engineering, and integration of sensors and special mission aircraft, manned and unmanned.”

“We are extremely excited to add this best-in-class contract vehicle to our portfolio, as it will significantly improve our ability to support innovative R&D and data management efforts across aviation for our warfighters operating in all domains,” added Brian.

Clients can access ASTRO through GSA FEDSIM and any GSA Office of Assisted Acquisition Services Contracting Officer granted a Delegation of Procurement Authority.

About AEVEX Aerospace

AEVEX Aerospace, headquartered in Solana Beach, California, supports the U.S. national security mission and partner nation needs around the world by providing full-spectrum aviation, remote sensing, and analysis solutions.  The company’s capabilities include custom design and engineering, sensor integration and sustainment, aircraft modification and certification, mission operations services, advanced intelligence data processing, exploitation, and dissemination solutions, and tailored hardware and software mission-system tools.  AEVEX uses agile and customized approaches to rapidly define, develop, and deliver specialized solutions for airborne special mission needs for the US Government, partner nations, and commercial businesses.  AEVEX has major offices in California, Massachusetts, New Mexico, North Carolina, Ohio, and Virginia.


Calais LeBlanc

Marketing Business Manager

(858) 704-4125

Understanding the Role and Impact of Each on your UAV-LiDAR Missions

This blog (second in a two-part series) continues our explanation of the variables which influence boresight calibration and strip alignment when flying drone-based LiDAR missions. If you missed the first part of this series, we recommend you check it out before continuing with this blog.

LiDAR boresight calibration estimates and corrects the boresight angles – the angular misalignment between the LiDAR and IMU body frames – as demonstrated in Figure 1 below. The difficulty is that the LiDAR boresight angles are correlated with the INS attitude errors. Some specialized third-party software can decorrelate these two error sources and estimate/calibrate the LiDAR boresight angles. However, some software developers list these capabilities but simply lump all errors together and employ automated algorithms to align the point clouds.

Figure 1. LiDAR boresight calibration – before (left) and after (right).

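During georeferencing, each LiDAR-frame point is rotated by the boresight matrix before the INS attitude rotation and lever-arm offset are applied. A sketch of building that matrix from roll/pitch/yaw boresight angles – the ZYX rotation order and the 0.1° example value are illustrative assumptions, not a specific system’s convention:

```python
import math

def boresight_matrix(roll, pitch, yaw):
    """3x3 rotation matrix from boresight angles in radians,
    composed as Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX order, assumed)."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def rotate(R, p):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

# A mere 0.1 degree yaw boresight error displaces a point 100 m away
# laterally by roughly 17 cm - why calibration matters at range:
R = boresight_matrix(0.0, 0.0, math.radians(0.1))
print(rotate(R, [100.0, 0.0, 0.0]))
```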

Strip alignment (also known as strip adjustment) can help improve the consistency between several adjacent UAV-LiDAR flight lines. Strip alignment workflows utilize algorithms which align the LiDAR point clouds from one strip to another without necessarily knowing the source of the error. Certain LiDAR post-processing software options have built-in capabilities for strip alignment. These align the point clouds by identifying common features captured in overlapping data sections and computing the geometric transformation between the datasets. Addressing boresight error is typically the first step in the strip alignment workflow.

Strip alignment procedures can be used if the system has undergone boresight calibration but there are remaining residual discrepancies in overlapping areas that cannot be explained by other sources of error. In other words, strip alignment is only necessary if there are still minor misalignments noticeable after the LiDAR point cloud has been georeferenced. Factors such as flight altitude profile, manual flight, wind or other environmental conditions can impact the flight trajectory in subtle ways that keep the point clouds from aligning as intended.

As another example, let’s consider flying a Geo-MMS LiDAR payload over a construction site. We fly the mission, collect and process the data and visualize it in our 3D point cloud viewer. Upon visual inspection, we see two lines in the point cloud for a feature we know should only be represented by one line (e.g., edge of a building roof). This may have been the result of poor flight planning and execution, inexperienced piloting or flying in harsh weather conditions. If you are on a tight schedule, strip alignment software can make a positive difference by offering different options to improve the consistency between flight lines. Different algorithms can correct the position and angles of the drone across the mission duration to align the data as it was intended to be collected originally. Post-processing the data in this fashion can save time by avoiding a repeat mission.

Figure 2. Misalignment of adjacent LiDAR strips intersecting a feature of interest (left) and with automated strip alignment algorithms employed (right).
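The "two lines where there should be one" symptom can be quantified before and after alignment with a simple nearest-neighbour statistic. A rough sketch with invented data (brute-force search, fine for small overlap regions):

```python
import numpy as np

def strip_misalignment(strip_a, strip_b):
    """Median nearest-neighbour distance from strip_b to strip_a --
    a quick proxy for how far two overlapping strips disagree."""
    # brute-force NN search; adequate for a few hundred overlap points
    d = np.linalg.norm(strip_b[:, None, :] - strip_a[None, :, :], axis=2)
    return np.median(d.min(axis=1))

# A roof edge captured twice, 12 cm apart across-track
roof_edge = np.c_[np.linspace(0, 10, 200), np.zeros(200), np.full(200, 8.0)]
shifted = roof_edge + np.array([0.0, 0.12, 0.0])
gap = strip_misalignment(roof_edge, shifted)
print(round(gap, 3))   # → 0.12 (metres)
```

A metric like this makes it easy to verify that an alignment step actually tightened the point cloud rather than merely moving the error elsewhere.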

In the example described above, the alternative to owning strip alignment software would be a repeat mission (correct planning and execution in better flying conditions). Repeating a mission as a one-off is fine, but users who regularly have to re-fly missions stand to benefit from dedicated point cloud strip adjustment software.

Software such as BayesStripAlign from BayesMap Solutions focuses on automated strip alignment for LiDAR point cloud data from UAVs and manned aircraft. If correct flight planning and execution are adhered to, and the onboard INS is of high quality, it is rarely necessary to add this level of capability to UAV-LiDAR post-processing. Specialized software is most useful when performing work at high altitude from manned aircraft with long-range sensors such as the Teledyne Optech CL-360XR. For LiDAR pilots who are forced to operate in harsh conditions (something we advise against), strip alignment software can help correct for the environmental impacts on the collected data. However, most such software has limited support for UAV-LiDAR operations, and the most practical approach remains avoiding operation in harsh weather, in which all of the aforementioned error sources will exceed their normal ranges.

To address this gap, Geodetics developed a set of algorithms that adaptively adjust the boresight angles for each individual LiDAR strip, eventually aligning all strips together. The figure below demonstrates the alignment of two sets of overlapping point clouds based on adjustment of the boresight angles. The algorithm converges rapidly and estimates accurate misalignment angles. Future developments will extend this algorithm to more generic data collection scenarios in which the AOI is primarily featureless terrain.

Figure 3. Example taken from Geodetics’ boresight angle adaptation software.
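To illustrate the idea of adapting boresight angles per strip, the sketch below simulates a strip distorted by a small roll error and recovers the correction that best realigns it. This is a simplified one-angle grid search for intuition only, not Geodetics' actual algorithm:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x (roll) axis by angle a, in radians."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def estimate_boresight_roll(observed, reference, candidates):
    """Return the roll correction (radians) whose re-rotated points best
    match the reference strip. Real tools solve all three angles jointly."""
    errs = [np.linalg.norm(rot_x(a) @ observed.T - reference.T)
            for a in candidates]
    return candidates[int(np.argmin(errs))]

true_err = np.radians(0.15)                       # simulated 0.15 deg roll error
reference = np.random.rand(500, 3) * 20.0         # "correct" strip
observed = (rot_x(-true_err) @ reference.T).T     # strip with the error baked in
grid = np.radians(np.linspace(-0.5, 0.5, 201))    # 0.005 deg search step
best = estimate_boresight_roll(observed, reference, grid)
print(np.degrees(best))                           # recovers roughly 0.15
```

Even this naive search converges to the simulated error; production algorithms replace the grid search with a least-squares solve over all three boresight angles and all overlapping strips.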

Geo-MMS customers can rest assured knowing that the proprietary Defense-grade inertial navigation technology developed by our navigation scientists and engineers is amongst the highest quality in the industry.

Originally published by Geodetics 

Understanding Two Critical Principles of LiDAR Data Acquisition

If you are familiar with any type of LiDAR data acquisition, you have likely stumbled across terms including ‘boresight calibration’ and ‘strip alignment’ (or strip adjustment). These terms relate to separate strategies for adjusting and aligning adjacent strips of aerial LiDAR data. In this blog (first in a two-part series), we will address why boresight calibration and strip alignment are important considerations for UAV-LiDAR missions. In part two of this series, we will provide practical information to the LiDAR end-user which further addresses and explains the points raised in this blog.

Boresight calibration and strip alignment are particularly relevant for UAV-based LiDAR solutions such as Geo-MMS LiDAR. UAV-LiDAR data is acquired in many parallel ‘strips’, each slightly overlapping its neighbours (< 10%). The purpose of boresight calibration is to correct the minute misalignments between adjacent strips noticeable in the raw point cloud.

Figure 1. Flat terrain, captured across seven parallel UAV-LiDAR flight lines.
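The spacing between such parallel flight lines follows from the flying height, the usable field of view, and the desired sidelap. A back-of-the-envelope helper, assuming flat terrain (the numbers are illustrative, not Geo-MMS specifications):

```python
import math

def line_spacing(agl_m, fov_deg, sidelap=0.10):
    """Flight-line spacing giving the requested sidelap between adjacent
    LiDAR strips, assuming flat terrain and a symmetric field of view."""
    swath = 2.0 * agl_m * math.tan(math.radians(fov_deg) / 2.0)
    return swath * (1.0 - sidelap)

# e.g. 60 m AGL, 70 deg usable FOV, 10% sidelap between strips
print(round(line_spacing(60.0, 70.0), 1))   # → 75.6 (metres)
```

Over sloped terrain the swath narrows on the uphill side, so conservative planners compute spacing from the lowest expected AGL along the line.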

The main concerns for the LiDAR end-user are:

  • Do we need data processing procedures to adjust our point clouds for possible misalignments?
  • Is additional software needed to make these adjustments?

But first, we should address what causes these misalignments in the first place. Once the error sources are identified, we can better understand the principles behind LiDAR boresight calibration and strip alignment. When looking at the error budget of any aerial LiDAR system, we experience both systematic and random errors. With correct system calibration, systematic errors are removed and random errors are minimized.

Systematic errors result in the systematic deviation of laser footprint coordinates. Integration of Geo-MMS LiDAR or Point&Pixel payloads on your drone requires the axes of the scanning reference coordinate system and the inertial platform reference coordinate system to be parallel. When physically mounting the payload to the UAV, it is not guaranteed that they will be parallel (human hands are not precision instruments). This results in what we term boresight error.

Figure 2 below summarizes the geometry of the different components in the Geo-MMS system in a simplified vector-based visual. Once the LiDAR sensor fires a pulse measuring the range vector from the laser to the scanned feature, r_laser, we need three more observations to georeference the captured point:

  • The position, provided by the GPS (r_GPS) and represented in the PPK solution
  • The leverarm offset to transform the GPS position to the IMU center, marked as l_GPS→IMU
  • The boresight shift to transfer the position from the IMU center to the LiDAR sensor center, marked as l_IMU→LiDAR

Figure 2. Direct georeferencing geometry

From vector geometry, the global position of the point, r_P, can be determined by adding the sequence of four vectors:

r_P = r_GPS + l_GPS→IMU + l_IMU→LiDAR + r_laser        (1)

In this geometric reconstruction, we have considered only the translation between the components. Besides translations, the rotation between these components is critical. Considering the rotational influence at the different sensor frames, Equation (1) can be rewritten as:

r_P = r_GPS + R_INS · (l_GPS→IMU + l_IMU→LiDAR + R_bore · r_laser)        (2)

  • R_INS represents the rotation matrix from the IMU body frame to the global frame, reconstructed from the roll, pitch and yaw of Geodetics’ Navigator (INS)
  • R_bore represents the rotation matrix between the IMU and the LiDAR, parameterized by the LiDAR boresight angles
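Putting this georeferencing chain into code makes the sequence of transformations concrete: the GPS position plus the INS-rotated sum of the lever arm, the boresight shift, and the boresight-rotated laser vector. All numbers below are invented for illustration (a hypothetical lever arm, boresight shift and a 90° yaw); real values come from your system's calibration:

```python
import numpy as np

def rpy_to_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation built from roll/pitch/yaw (radians), ZYX order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(r_gps, R_ins, lever_arm, bore_shift, R_bore, r_laser):
    """GPS position plus the INS-rotated sum of lever arm, boresight
    shift, and the boresight-rotated laser range vector."""
    return r_gps + R_ins @ (lever_arm + bore_shift + R_bore @ r_laser)

r_gps = np.array([1000.0, 2000.0, 120.0])    # PPK antenna position
R_ins = rpy_to_matrix(0.0, 0.0, np.radians(90.0))
lever_arm = np.array([0.0, 0.0, -0.10])      # GPS antenna -> IMU centre
bore_shift = np.array([0.05, 0.0, -0.05])    # IMU centre -> LiDAR centre
R_bore = np.eye(3)                           # assume perfectly aligned axes
r_laser = np.array([0.0, 0.0, -60.0])        # return 60 m below the sensor
p = georeference(r_gps, R_ins, lever_arm, bore_shift, R_bore, r_laser)
print(p)
```

With the boresight rotation set to identity, any unmodelled boresight misalignment shows up directly as a position error on the ground, which is exactly what boresight calibration is designed to remove.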

Now that we have covered the basic geometry, we can highlight the terms of the direct georeferencing equation that matter most in practical LiDAR usage:

  • l_GPS→IMU refers to the GPS-to-IMU leverarm. This is a known calibrated value and is pre-configured with your Geo-MMS system.
  • l_IMU→LiDAR refers to the IMU-to-LiDAR offset, or LiDAR boresight shift. This too is a known calibrated value and is pre-configured with your Geo-MMS system.
  • R_bore refers to the LiDAR boresight angles. These too are pre-configured with your Geo-MMS system.
  • r_GPS refers to the PPK solution of the GPS antenna.
  • R_INS refers to the INS-based rotation. This is determined by the INS attitude (roll, pitch, yaw), and its accuracy is typically a function of the IMU grade, dual-GPS-based heading and filter-tuning procedures.

The boresight error between the LiDAR sensor and the onboard GPS/INS coordinate system is the largest systematic error source in UAV-LiDAR. The laser footprint error produced as a result of boresight error is also impacted by flight altitude (AGL) and scan angle.
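A quick numeric check shows how the same small angular error grows with AGL and scan angle. This is a flat-terrain approximation, and the 0.05° error figure is invented for illustration:

```python
import math

def footprint_error(agl_m, scan_deg, boresight_err_deg):
    """Horizontal ground displacement caused by a small boresight angle
    error at a given flying height and scan angle (flat terrain)."""
    t = math.radians(scan_deg)
    dt = math.radians(boresight_err_deg)
    return agl_m * (math.tan(t + dt) - math.tan(t))

for agl in (40, 80, 120):
    for scan in (0, 30):
        e = footprint_error(agl, scan, 0.05)
        print(f"AGL {agl:>3} m, scan {scan:>2} deg: {100 * e:.1f} cm")
```

Doubling the flying height doubles the displacement, and larger scan angles amplify it further (the tan term steepens), which is why boresight residuals are easiest to spot at the swath edges of high-altitude flights.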

The main purpose of this geometry breakdown is to highlight the five most important variables which directly impact the quality of the generated LiDAR data. Among these variables, two cannot be pre-calibrated: the PPK solution and the INS attitude. Several algorithms are implemented in Geodetics’ LiDARTool software to improve the PPK and INS solutions, including forward-backward PPK smoothing and rounding point coordinate values to fine-resolution grids. However, leftover errors can still impact point cloud reconstruction quality in harsh environments.
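The forward-backward idea can be illustrated with a generic zero-phase filter. This is a simplification for intuition, not the smoother actually used in LiDARTool:

```python
import numpy as np

def forward_backward_smooth(x, alpha=0.2):
    """Run a first-order low-pass filter forward then backward so the
    smoothed result has no phase lag (zero-phase filtering)."""
    def one_pass(seq):
        out = np.empty(len(seq))
        out[0] = seq[0]
        for i in range(1, len(seq)):
            out[i] = alpha * seq[i] + (1.0 - alpha) * out[i - 1]
        return out
    return one_pass(one_pass(x)[::-1])[::-1]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
truth = np.sin(t)                                # smooth "true" trajectory
noisy = truth + rng.normal(0.0, 0.1, t.size)     # noisy position estimate
smooth = forward_backward_smooth(noisy)
print(np.std(noisy - truth), np.std(smooth - truth))
```

A single forward pass would drag the estimate behind the true trajectory; running the filter in both directions cancels that lag, which is the same reason PPK trajectories are processed forward and backward before being combined.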

In part two of this series, we will more closely examine the questions posed at the beginning of this blog, pertaining to the two main concerns of the LiDAR end-user.

Originally posted by Geodetics