
LiDAR Systems: A Guide to Understanding LiDAR Sensor Function and Capability

Let’s talk about the various metrics used when discussing Geo-MMS LiDAR sensor characteristics – and how to sort through the overwhelming information provided by LiDAR sensor manufacturers.

LiDAR Systems: Sensor Range and Precision

The range of a LiDAR sensor is often one of the key characteristics highlighted in any information you receive about that sensor. Sensor range is tested under laboratory conditions and is presented as the maximum range at which a detection can be captured. In practical mobile mapping, however, the usable range is significantly less than the nominal range stated in a manufacturer’s datasheet. To simplify, we group mobile mapping range into three categories: tactical, mid-range and long-range.

Accuracy and precision are two different statistical concepts.

  • Accuracy measures how close a range measurement (mean) is to the true distance of an object.
  • Precision measures how repeatable consecutive measurements of the same distance are.

For 3D mapping cases, precision is critical for point cloud ‘crispness’ in the form of clean corners and defined features. Lower precision can result in fuzziness in the generated point clouds, making it difficult to discern important features.
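A toy example makes the distinction concrete: accuracy is how far the mean of repeated range measurements sits from the true distance, while precision is the spread of those measurements. The values below are invented for illustration.

```python
# Toy illustration of accuracy vs. precision for repeated range measurements
# to a target at a known true distance. All values are invented.
import statistics

true_range_m = 50.00
measurements = [50.12, 50.09, 50.11, 50.10, 50.13]  # repeated shots at the same target

accuracy_error = statistics.mean(measurements) - true_range_m  # how far the mean is from truth
precision = statistics.stdev(measurements)                     # spread of repeated measurements

print(f"mean offset (accuracy): {accuracy_error:+.3f} m")
print(f"std dev (precision):    {precision:.3f} m")
```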

LiDAR Systems: Sensor Ruggedness

The Ingress Protection (IP) Code (IEC 60529) describes a sensor’s resistance to ingress of solids, such as dust, and liquids, such as water.

  • The first digit represents resistance to solid particles such as dust (0-6) – 6 is the most resistant
  • The second digit represents resistance to water (0-9) – 9 is the most resistant (a short decoding example follows this list)
  • Spinning sensors generally have the highest IP ratings (IP67 to IP69K)
  • Raster scanning systems will typically have a slightly lower rating (IP65 to IP67)
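For quick reference, a small helper like the sketch below can decode the two digits; the descriptions are simplified paraphrases of IEC 60529, not the normative wording.

```python
# Simplified helper for reading an IP rating string such as "IP67" or "IP69K".
# The descriptions paraphrase IEC 60529 and are not the normative wording.
def describe_ip(code: str) -> str:
    digits = code.upper()[2:]                 # drop the leading "IP"
    solids, liquids = digits[0], digits[1:]   # e.g. "6" and "7" (or "9K")
    return (f"{code}: solid-particle protection {solids} (6 = dust-tight), "
            f"liquid protection {liquids} (higher = withstands harsher water exposure)")

print(describe_ip("IP67"))
print(describe_ip("IP69K"))
```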

Figure 1. Ingress Protection Rating Guide


LiDAR Systems: Sensor Architecture & Laser Type

In this series of blogs, we have broadly separated UAV-LiDAR sensors into two groups – spinning LiDAR and raster scan LiDAR systems. The system architectures of these two categories are fundamentally different. Raster scan sensors are the only kind that can be considered truly survey-grade; they were designed and developed for this function. These sensors also capture more returns per pulse, giving them a distinct advantage for LiDAR mapping over foliage.

Spinning LiDAR sensors were primarily developed for autonomous-vehicle applications and robotics, but their usefulness also extends to affordable 3D mobile mapping. Installation on an autonomous vehicle illustrates the need for the highest level of Ingress Protection – to withstand power washing and exposure to harsh elements. Being lightweight and low-cost makes them attractive to many pursuing UAS-LiDAR projects. However, those who require survey-grade accuracy and precision should always opt for a raster scan LiDAR sensor.

Spinning sensors typically utilize Edge Emitting Laser Diodes (EELD), while raster scan systems utilize a fiber-pulsed laser. EELDs are a mature technology and offer reliable all-around performance. These sensors typically operate at the ~905 nm wavelength. Wavelengths in the 600-1,000 nm range are not inherently eye-safe, as the laser light can be focused and absorbed by the eye. As a result, sensors operating in this band have their output power limited or capped to comply with eye-safety regulations.

Fiber lasers are generally more expensive, more power-hungry and operate at the ~1,550 nm wavelength. Light at 1,550 nm is eye-safe, so these sensors can operate at high or full power, which increases the maximum achievable range and allows operation at higher AGL altitudes. Sensor manufacturers who use 1,550 nm do so precisely so they can drive more power through the laser without violating eye-safety regulations.

Choosing the correct LiDAR sensor for your application and budget is an essential stage in setting up your UAV or mobile mapping system. For any questions relating to LiDAR sensors or your project in general, Request more Information. Already know the ideal sensor for your use-cases? Request a Quote today!

Figure 2. Geo-MMS Point&Pixel system integrated with the Teledyne Optech CL-360 system.

In case you missed Part 1 of LiDAR Systems and Sensors of this series, it can be found here.

Originally published with Geodetics

A Guide to Understanding LiDAR Sensor Function & Capability

When we discuss Geo-MMS LiDAR sensor characteristics, many metrics and factors go into understanding the often overwhelming information provided by LiDAR sensor manufacturers. This guide is provided so that our customers have a clear picture when selecting the optimal LiDAR sensor for their Geo-MMS product.

In this study, we will examine several LiDAR sensors supported by the Geo-MMS Family of Products, including sensors from Teledyne Optech, Quanergy and Velodyne.

Light Detection and Ranging (LiDAR) is just one type of active range measurement technology. Other active ranging technologies such as sonar and radar use sound and radio waves as their sensing methods respectively, whilst LiDAR utilizes short wavelength laser light for measuring the distance between the sensor and detected objects.

Figure 1. Typical Geo-MMS Point&Pixel system in flight (Velodyne Puck VLP-16 LiDAR)

Points Per Second

Points per Second (PPS) is one of the best metrics for gauging LiDAR system performance, as it combines three quantities: vertical points (channels), horizontal points and frame rate, each described below.

Figure 2. Calculation of Points per Second emitted by spinning analog LiDAR sensors.

  • The channels, or ‘vertical points’, of a LiDAR sensor is a fixed number of laser beams, dependent on the sensor (and often indicated in the sensor name, e.g. M8, VLP-16)
  • Frame rate is the rotation rate of the sensor (expressed as a frequency, Hz)
  • The horizontal points can be calculated by dividing the horizontal field-of-view (FOV) by the horizontal angular resolution
  • Angular resolution and frame rate are typically provided as ranges in sensor datasheets. The finer angular resolution corresponds to the lower frame rate, while the coarser angular resolution corresponds to the higher frame rate.

Frame rate and horizontal points trade off against each other: a faster frame rate coarsens the horizontal angular resolution and therefore yields fewer horizontal points per revolution. See below an example from the Quanergy M8 Ultra LiDAR sensor.

Table 1. PPS calculation for the Quanergy M8 Ultra LiDAR.
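The PPS arithmetic itself is simple enough to express in a few lines. The sketch below uses illustrative spinning-sensor values (eight channels, 360° FOV, 0.1° horizontal resolution, 10 Hz), not official M8 Ultra datasheet figures.

```python
# Points-per-second arithmetic for a spinning LiDAR:
# PPS = channels x (horizontal FOV / horizontal angular resolution) x frame rate.
# The numbers below are illustrative, not official M8 Ultra datasheet values.
def points_per_second(channels, h_fov_deg, angular_res_deg, frame_rate_hz):
    horizontal_points = h_fov_deg / angular_res_deg  # points per revolution, per channel
    return channels * horizontal_points * frame_rate_hz

# e.g. 8 channels, 360 deg FOV, 0.1 deg horizontal resolution, 10 Hz spin rate
pps = points_per_second(channels=8, h_fov_deg=360, angular_res_deg=0.1, frame_rate_hz=10)
print(f"{pps:,.0f} points per second (single return)")  # 288,000
```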

These calculations and metrics are relevant for analog spinning LiDAR sensors, such as the Velodyne and Quanergy models Geodetics integrates with. For more information regarding single laser raster scanning LiDAR systems, please refer to the final section of this blog. Further information will also be presented in the second part of this blog series.

Returns

Over a typical UAV-LiDAR mission, each LiDAR pulse can reflect off many objects with different reflectivity properties. An emitted laser pulse that encounters multiple reflective surfaces as it travels towards the target feature is split into as many returns as there are reflective surfaces. The first return is the most significant and is associated with the highest feature in the landscape, such as a treetop or a road surface. Multiple returns make it possible to detect the elevations of several objects within the footprint of a single outgoing laser pulse.

Figure 3. A single tree isolated from a Geo-MMS LiDAR solution. On the left, the points are colored by elevation. On the right, the points are colored by return number (red = first return; green = second return; pink = second-last return; white = last return)

With each additional return a LiDAR sensor can capture, the number of points actually recorded in that return decreases significantly; the large majority of points in any LiDAR dataset come from the first return. Manufacturers who multiply their PPS value by the maximum possible number of returns therefore provide misleading information. The return capability of LiDAR is important for certain applications, but its value should not be overestimated. When mapping over urban areas consisting primarily of building roofs and streets, additional returns offer little benefit. Multiple returns do offer an advantage over vegetated areas, as later returns penetrate the canopy to capture accurate, dense ground point measurements beneath forested or vegetated areas – something fundamentally lacking in photogrammetry.

Takeaway: Multiple returns don’t increase your PPS linearly with the number of returns possible. Actual returns are based upon object reflectance.
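As a small illustration of how returns typically distribute, the sketch below tallies points by return number for an invented sample; with real data the return number comes from the corresponding LAS point attribute.

```python
# Tally points by return number for an invented sample of returns.
# With real data, the return number comes from the LAS point records.
import numpy as np

return_number = np.array([1, 1, 1, 2, 1, 1, 3, 2, 1, 1, 1, 2, 1, 4, 1])  # toy data
values, counts = np.unique(return_number, return_counts=True)

for ret, n in zip(values, counts):
    print(f"return {ret}: {n} points ({100 * n / return_number.size:.0f}%)")
```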

Sensor Type | Beam Divergence

Geodetics’ long-range LiDAR option integrates a raster scanning LiDAR sensor, which has a different architecture from the spinning sensors discussed above. These sensors utilize a single laser and a mechanically oscillating mirror, so points per second is a function of the scan speed (mirror oscillation) and the Pulse Repetition Frequency (PRF). The Teledyne Optech CL-360 is a single-laser LiDAR, and its frame rate is expressed as the PRF – the number of pulses emitted per unit of time, measured in pulses per second.

Beam divergence describes how the photons of a single emitted beam spread as they leave the LiDAR sensor. Because each laser pulse is emitted in a cone, its energy is spread over a larger footprint as distance increases, so the returned signal weakens rapidly with range. This is one of the areas where raster scan systems hold a distinct advantage: beam divergence for the Teledyne Optech CL-360 is 0.3 mrad, while spinning sensors spread the laser energy over roughly 3 mrad. Because the total pulse energy remains constant regardless of divergence, a larger beam divergence spreads that energy over a larger area, leading to a lower signal-to-noise ratio. This is part of the reason we recommend flying at a low AGL for UAV-LiDAR missions.
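A quick back-of-the-envelope comparison shows why divergence matters. The sketch below uses the small-angle approximation (footprint diameter ≈ range × divergence) with the 0.3 mrad and ~3 mrad figures quoted above.

```python
# Footprint comparison for 0.3 mrad vs ~3 mrad divergence at 100 m range,
# using the small-angle approximation: footprint diameter ~= range x divergence.
import math

def footprint_diameter_m(range_m, divergence_mrad):
    return range_m * divergence_mrad * 1e-3

for div_mrad in (0.3, 3.0):
    d = footprint_diameter_m(100.0, div_mrad)
    area_cm2 = math.pi * (d / 2) ** 2 * 1e4
    print(f"{div_mrad} mrad -> footprint ~{d * 100:.0f} cm wide, area ~{area_cm2:.0f} cm^2")
# Same pulse energy over ~100x the area means ~100x lower energy density for the wider beam.
```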

Figure 4. Beam divergence of a laser pulse as it is emitted from the sensor.

In the second part of this series, we will explore the differences between raster scan and spinning LiDAR sensors including the laser types used in each. We will also examine more closely other commonly used metrics including range, LiDAR precision and system ruggedness and durability. For any questions relating to LiDAR sensors or your project in general, please Request more Information.

Originally published by Geodetics

LiDAR Sensors for UAV / Drone Application

When selecting the right LiDAR, you must consider the various parameters that define the sensor’s performance. These parameters are summarized in the table below.

LiDAR Specifications Listed by Range

If you are an expert in LiDAR scanning, you can probably stop here. Otherwise, this article offers assistance in selecting the right LiDAR sensor for your application. There are several approaches to narrowing down the right sensor. The first, and probably the easiest, is basing your decision on price. Another approach considers factors such as the operational environment, the weight of the sensor (which impacts flight duration), and so on. There is nothing wrong with these approaches; however, here we tackle the question from a different perspective. In our approach, we consider flight altitude, required resolution and the application. The reason we drop accuracy from consideration is that in most cases, regardless of the LiDAR sensor, the desire is for the highest accuracy possible.

Flight Altitude

In LiDAR mapping, the flight altitude is a key parameter in picking the appropriate sensor. If you can fly below 60 m AGL, tactical-range sensors are appropriate. For altitudes higher than 60 m, you must consider either mid-range or long-range LiDAR sensors, as shown below.

Figure: LiDAR sensor selection by flight altitude
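As a simple rule of thumb, the selection logic above can be captured in a few lines; note that the 100 m boundary between mid-range and long-range used here is an illustrative assumption, as the text only specifies the 60 m threshold.

```python
# Rule-of-thumb sensor class by flight altitude. The 60 m threshold follows the
# text above; the 100 m boundary between mid- and long-range is an assumption.
def sensor_class_for_agl(agl_m: float) -> str:
    if agl_m <= 60:
        return "tactical-range"
    if agl_m <= 100:
        return "mid-range"
    return "long-range"

for agl in (40, 80, 150):
    print(f"{agl} m AGL -> {sensor_class_for_agl(agl)} sensor")
```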

Resolution

The projection of a single laser beam onto the ground determines its resolution. The resolution is a function of the flight altitude, the scan rate (frequency) and the angular resolution. A typical minimum resolution for tactical/mid-range LiDAR is about 5-10 cm at 30 m AGL, while for long-range LiDARs it is about 2-3 cm at 100 m. The frequency and angular resolution of tactical-range and mid-range LiDAR sensors are in the same range; thus, if you need similar or higher resolution at higher altitudes, you must use a long-range LiDAR sensor.

Figure: Resolution and footprint size
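As a rough illustration of how altitude and angular resolution combine, the sketch below estimates ground point spacing using the small-angle approximation; the altitude/resolution pairs are illustrative, not datasheet figures.

```python
# Rough ground point spacing: spacing ~= altitude x angular resolution (in radians).
# The altitude/resolution pairs below are illustrative, not datasheet values.
import math

def ground_spacing_cm(agl_m, angular_res_deg):
    return agl_m * math.radians(angular_res_deg) * 100

print(f"{ground_spacing_cm(30, 0.2):.1f} cm at 30 m AGL with 0.2 deg resolution")
print(f"{ground_spacing_cm(100, 0.01):.1f} cm at 100 m AGL with 0.01 deg resolution")
```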

Application

The last parameter to consider is the application. For applications such as agriculture, canopy classification, forestry and forest planning, topography (DEM/DTM) and archaeology, where canopy penetration and ground returns are required, the number of channels and returns provided by the sensor will help narrow your selection. The Velodyne LiDAR sensors currently offered with Geo-MMS provide up to two returns, the Quanergy M8 LiDAR provides up to three returns, and the Teledyne Optech LiDAR sensors provide up to four returns. For applications such as powerline and transmission inspection, railway infrastructure inspection, BIM/architecture, quarry/open-pit mining, etc., the number of returns is less important.

The Geo-MMS product suite is available with a wide range of LiDAR sensors. We classify LiDAR sensors into three categories: tactical, mid and long-range as illustrated below:

Supported LiDAR sensors in the tactical-range, mid-range, and long-range scanning

Originally published by Geodetics

Road Surveying using Geodetics’ Geo-MMS LiDAR Suite of Products

The purpose of this blog – the second in a two-part series – is to continue to explain the capabilities of Geo-MMS LiDAR payloads for road surveys and highway scanning. We will explore the wider workflows surrounding the digital mapping of transport infrastructure and associated assets. In case you missed Part 1 of Road Surface Mapping with LiDAR, it can be found here.

Figure 1. Geo-MMS ground-vehicle mounting assembly with 16-channel tactical-range LiDAR sensor.

Expected increases in US infrastructure spending over the next few years will undoubtedly include heavy capital investment in new transport projects. However, what is arguably more important is the maintenance of existing transport infrastructure. Accumulated road damage carries a high cost to address, and oftentimes, as soon as damage in one location is addressed, a new problem area is identified. Continuous ‘damage repair’ cycles can be avoided by identifying areas of weakness in the network and addressing them before the damage grows to the extent that a manned crew deployment is needed, disrupting local traffic and further increasing costs. With a lack of high-quality data, inconsistencies in damage reporting and an overall lack of adequate prioritization, it can be difficult to provide an integrated solution.

A strategic documentation plan for all road networks (preventative maintenance) is more economically attractive than the passive approach often taken by many transport planning departments. 3D scans of the full transport environment feed into other important infrastructure inspections, including road surveys that include assessment of pavement conditions, bridge inspection, roadside powerline assessment, etc. Transport planners need to work smarter rather than harder by leveraging the available technology, achieving more results with less manpower.

LiDAR scan data can be enhanced with spherical imaging of the surrounding landscape in a 360° frame – the same frame a LiDAR sensor operates within. The Ladybug, an RGB camera designed for mobile mapping, captures spherical 360° panoramic views of the environment by merging images from six internal cameras. Using the Geo-MMS Navigator, images from all six Ladybug cameras are geo-tagged, allowing features to be geolocated in the images. The combination of LiDAR and spherical imaging provides a powerful tool for powerline and transmission line monitoring, bridge inspection, road surveys, and more. To facilitate this multi-sensor integration, Geodetics provides a mounting system specifically for this setup, as shown in Figure 2. The bracket has two degrees of freedom so it can be mounted on any vehicle regardless of orientation and height.

Figure 2. This bracket has two degrees of freedom such that it can be mounted on any vehicle regardless of orientation and height.

This provides a second dataset, used for detailed visual assessment of features of interest identified through LiDAR, including road surface markings and pavement cracks. Full panoramic images and 3D scan data, coupled with simultaneously collected precise position, orientation and timing data, combine to produce multifaceted, high-resolution maps. These maps support walk-through simulations – an innovative solution that will become more common as the value of manual inspections gradually decreases and digital inventories are required to document all infrastructure and associated assets. The collected data is most valuable when assimilated into existing infrastructure databases, which help define strategies for future developments, maintenance schedules and transport planning, as exemplified in BIM.

Actionable Data

All Geo-MMS LiDAR payloads come with the seamless capability to switch between ground-vehicle and UAV platforms. Additional opportunities arise when integrating ground-captured and UAV-captured point cloud data in the same viewing frame. Within the realm of road-surface mapping, these applications extend to damage assessment after natural disasters, aging facility inspection, progress inspection on road remodeling projects, and more.

Figure 3. Raw LAS point cloud file captured from a car-mounted Geo-MMS LiDAR payload.

While terrestrial laser scanning can produce superb accuracy, its obvious limitations are the set-up and scan time required for a relatively small section of roadway. Coupled with the additional labor hours needed to perform this work, accuracy may be high, but at the expense of time, safety and cost-efficiency. A combination of terrestrial, mobile and UAV data may be the most comprehensive option, but the majority of the workload can be handled by a mobile scanning device. This can be installed on the rear of a car, SUV, ATV, pickup truck or any other vehicle robust enough to support the temporary installation. The popularity of mobile scanning has grown to the point where software developers have focused on this specific niche and application – TopoDOT being one popular example. Software options can automatically create cm-level road surface models and identify and export curbs, railings, utility poles, encroaching vegetation and other features of interest in the transport infrastructure, which can be extracted for further analysis in GIS and CAD programs. Meshing, including TINs (Triangulated Irregular Networks), can be performed using widely employed automated algorithms that generate 3D statistics on the shape, volume and frequency of structural anomalies.
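As a minimal illustration of the meshing step, the sketch below builds a simple TIN from a toy point cloud with SciPy's Delaunay triangulation; real road-surface workflows add filtering, classification and feature extraction on top of a step like this.

```python
# Build a simple TIN from a toy point cloud: Delaunay-triangulate the XY
# coordinates with SciPy. Real road-surface workflows add filtering,
# classification and feature extraction on top of a step like this.
import numpy as np
from scipy.spatial import Delaunay

points = np.random.default_rng(0).uniform(0, 10, size=(500, 3))  # toy XYZ cloud
tin = Delaunay(points[:, :2])                                     # triangulate in plan view

print(f"{len(tin.simplices)} triangles over {len(points)} points")
```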

What many are seeing across the industry are specialized scanning systems developed specifically for the goal of road/highway maintenance across transport and infrastructure industries. As pioneers in Defense-grade navigation technology and sensor integration, Geodetics is the trusted source for cost-effective and customized LiDAR mapping solutions in the air, on land, and at sea.

Originally published by Geodetics

Leveraging your Geo-MMS for Detailed Road Surface & Infrastructure Modeling

Following the economic downturn experienced in 2020, the US is expected to significantly increase investment in the country’s infrastructure (e.g. roads, bridges) over the next several years, becoming a key catalyst for growth regeneration in the US and global economies. With this in mind, this blog (the first in a two-part series) will focus on use-cases for leveraging Geodetics’ Geo-MMS LiDAR payloads for road surface mapping applications, including identification of small cracks, potholes and road-surface anomalies.

A strategic documentation plan for all road networks is more economically attractive than implicit ignorance of the situation. 3D scans of the full transport environment feed into other important infrastructure inspections, including assessment of pavement conditions, bridge monitoring/inspection, roadside powerline assessment, and more.

Geodetics’ customers have leveraged the power of their LiDAR mapping systems for use on ground-vehicles for many years. These include cars, all-terrain vehicles (ATVs) and rail-modified platforms (as shown below). Learn more about the applications of rail-modified platforms from a previously published blog. Geodetics has developed ground-vehicle mounting assemblies designed to accommodate all Geo-MMS LiDAR payloads. These mounting brackets are designed to be flexible, like the payloads they carry, and can be installed on virtually any moving vehicle.

Figure 1. Survey-grade raster scanning LiDAR sensor, installed on a modified Geo-MMS ground-vehicle mounting assembly (left – Access Surveyors) and Geo-MMS 32-channel tactical-range LiDAR with 360° spherical imaging camera integrated for off-road mapping (right – Precise Sensing).

Mobile LiDAR Scanning in Practice

For road surface mapping, the sensor should be elevated and orientated such that it scans the road surface and the surrounding terrain/features in a full 360° frame. While all LiDAR sensors integrated by Geodetics can be integrated on ground-based mobile platforms, recent demand has specifically been targeted towards our high-end survey-grade raster scanning LiDAR sensors. The unrivaled precision, accuracy, and point density of these sensors make them ideally suited to capturing and documenting cracks and distresses on roads, which may not be captured by spinning analog LiDAR sensors. The heavier weights of these sensors are not a concern for ground-based mobile mapping, unlike their effects on flight time when mounted to a UAV. The cm-level resolution offered by these sensors is frequently sought by those aiming for the highest accuracy possible for road surface mapping.

One of the many advantages of mobile road scanning, as stressed previously, is the rapid and efficient identification of small cracks, potholes and road surface anomalies. The point density achieved with high-performance systems allows the user to scan at regular highway speeds, so vast datasets can be collected in a fraction of the time of traditional methods, all while improving positional accuracy and increasing safety. The reflectivity characteristics of road markings, road signs and railings along transport routes make them stand out when viewing the collected LiDAR data by the intensity channel of the raw LAS files.

Figure 2 below displays data captured with the CL-360 LiDAR from Teledyne Optech. The key to collecting such high-quality point clouds lies in optimizing a balance between sensor scan line frequency, PRF (Pulse Repetition Frequency), and vehicle speed. For the highest point density, we can select the maximum capabilities of 250 scan lines per second and a 500 kHz PRF. Vehicle speed will then ultimately determine LiDAR point density. Using the full FOV available, we can detect all features of interest on the road and surrounding terrain.
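To see how vehicle speed drives point density at these settings, here is a small back-of-the-envelope sketch; the 250 scan lines per second and 500 kHz PRF come from the paragraph above, while the speeds are illustrative.

```python
# Back-of-the-envelope point spacing for a mobile scan at 250 scan lines/s and
# a 500 kHz PRF (figures quoted above); the vehicle speeds are illustrative.
def scan_spacing(scan_lines_per_s, prf_hz, speed_kmh):
    speed_ms = speed_kmh / 3.6
    along_track_m = speed_ms / scan_lines_per_s   # distance travelled between scan lines
    points_per_line = prf_hz / scan_lines_per_s   # pulses fired during one mirror sweep
    return along_track_m, points_per_line

for speed in (30, 60, 100):
    gap_m, ppl = scan_spacing(250, 500_000, speed)
    print(f"{speed} km/h -> a scan line every {gap_m * 100:.1f} cm, ~{ppl:.0f} points per line")
```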

Figure 2. Raw LAS point cloud file captured from a car-mounted survey-grade payload.

Actionable Data

Assessment records of roads and their associated assets are increasingly needed in formats native to GIS and CAD programs. LiDAR scanning introduces unmatched levels of dimensional accuracy, scan feature richness and efficiency of data capture. Capturing LiDAR scan data along road and rail transportation corridor routes can generate terabytes of point cloud data. An efficient process to manage the captured data, assess its quality and extract the desired information is necessary to seamlessly feed downstream planning, design, engineering and construction operations.

Captured scan data can be meshed in post-processing software to produce a 3D structure of any detected anomalies. The 3D nature of the scan data also allows point cloud sections to be inverted and viewed from the bottom, something not possible with imagery alone. A basic example is shown in Figure 3 below, with meshing performed with open-source point cloud processing software.

Figure 3. Example showing a basic mesh of point cloud data to highlight road-surface features.

Figure 4. Raw LAS intensity channel and feature signature.

In part two of this blog series, we will dive further into the role of mobile LiDAR scanning on road and infrastructure projects along with the role RGB imaging (as seen in Figure 1) plays in the overall mapping ecosystem and workflow.

Originally published by Geodetics

Boresight Calibration & Strip Alignment: Understanding the Role and Impact of Each on your UAV-LiDAR Missions

This blog (second in a two-part series) continues our explanation of the variables which influence boresight calibration and strip alignment when flying drone-based LiDAR missions. If you missed the first part of this series, we recommend you check it out before continuing with this blog.

LiDAR boresight calibration aims to estimate and correct the LiDAR boresight angles, as demonstrated in Figure 1 below. Boresight calibration addresses the alignment of the LiDAR and IMU body frames – specifically, the correlation between the LiDAR boresight angles and the INS attitude. Some specialized third-party software can decorrelate these two error sources and estimate/calibrate the LiDAR boresight angles. However, some software developers list these capabilities but simply lump all errors together and employ automated algorithms to align the point clouds.

Figure 1. LiDAR boresight calibration – before (left) and after (right)

Strip alignment (also known as strip adjustment) helps improve the consistency between several adjacent UAV-LiDAR flight lines. Strip alignment workflows use algorithms that align the LiDAR point clouds from one strip to another without necessarily knowing the source of the error. Certain LiDAR post-processing software options have built-in capabilities for strip alignment: they align the point clouds based on common features captured in overlapping data sections and compute the geometric transformation between those datasets. Addressing boresight error is typically the first step in the strip alignment workflow.

Strip alignment procedures can be used if the system has undergone boresight calibration but there are remaining residual discrepancies in overlapping areas that cannot be explained by other sources of error. In other words, strip alignment is only necessary if there are still minor misalignments noticeable after the LiDAR point cloud has been georeferenced. Factors such as flight altitude profile, manual flight, wind or other environmental conditions can impact the flight trajectory in subtle ways that keep the point clouds from aligning as intended.
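Conceptually, the core of a strip-alignment step is estimating the rigid transformation that maps one overlapping strip onto another. The sketch below is a generic least-squares (Kabsch/SVD) alignment over already-matched points with synthetic data; it is not the algorithm used by any particular strip-alignment product, which must also find the correspondences and may apply time-varying trajectory corrections.

```python
# Generic least-squares (Kabsch/SVD) alignment of two overlapping strips, given
# already-matched points. Synthetic data only; real strip-alignment software also
# finds the correspondences and may apply time-varying trajectory corrections.
import numpy as np

def rigid_align(strip_a, strip_b):
    """Estimate R, t so that strip_b @ R.T + t best matches strip_a."""
    ca, cb = strip_a.mean(axis=0), strip_b.mean(axis=0)
    h = (strip_b - cb).T @ (strip_a - ca)        # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, ca - r @ cb

# Synthetic check: strip B is strip A observed with a 0.5 deg yaw offset and a small shift.
rng = np.random.default_rng(1)
strip_a = rng.uniform(0, 50, size=(200, 3))
yaw = np.deg2rad(0.5)
rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
               [np.sin(yaw),  np.cos(yaw), 0.0],
               [0.0,          0.0,         1.0]])
strip_b = strip_a @ rz.T + np.array([0.10, -0.05, 0.02])

r_est, t_est = rigid_align(strip_a, strip_b)
residual = np.abs(strip_b @ r_est.T + t_est - strip_a).max()
print(f"max residual after alignment: {residual:.2e} m")
```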

As another example, let’s consider flying a Geo-MMS LiDAR payload over a construction site. We fly the mission, collect and process the data and visualize it in our 3D point cloud viewer. Upon visual inspection, we see two lines in the point cloud for a feature we know should only be represented by one line (e.g., edge of a building roof). This may have been the result of poor flight planning and execution, inexperienced piloting or flying in harsh weather conditions. If you are on a tight schedule, strip alignment software can make a positive difference by offering different options to improve the consistency between flight lines. Different algorithms can correct the position and angles of the drone across the mission duration to align the data as it was intended to be collected originally. Post-processing the data in this fashion can save time by avoiding a repeat mission.

Figure 2. Misalignment of adjacent LiDAR strips intersecting a feature of interest (left) and with automated strip alignment algorithms employed (right).

In the example described above, the alternative to owning strip alignment software would be a repeat mission (correct planning and execution in better flying conditions). Repeating a mission as a one-off is fine, but users who regularly have to re-fly missions may stand to benefit from a dedicated point cloud strip adjustment software.

Software such as BayesStripAlign from BayesMap Solutions focuses on automated strip alignment for LiDAR point cloud data from UAVs and manned aircraft. If correct flight planning and execution are adhered to, and the onboard INS is of high quality, it is rarely necessary to add this level of capability to UAV-LiDAR post-processing. Specialized software can be useful when performing work at high altitudes from manned aircraft with long-range sensors such as the Teledyne Optech CL-360XR. For LiDAR pilots who are forced to operate in harsh conditions (something we advise against), strip alignment software can help correct for the environmental impacts on the data collection; however, most such software has limited support for UAV-LiDAR operations. The most practical approach is to avoid operating the system in harsh weather, in which the aforementioned error sources will all exceed their normal ranges. To counter this, Geodetics developed a set of algorithms that adaptively adjust the boresight angles for each individual LiDAR strip, eventually aligning all strips together. The figure below demonstrates the system aligning two sets of overlapping point clouds by adjusting the boresight angles. The algorithm converges rapidly and estimates accurate misalignment angles. Future developments will extend this algorithm to more generic data collection scenarios in which the AOI is primarily featureless terrain.

Figure 3. Example taken from Geodetics’ boresight angle adaptation software.

Geo-MMS customers can rest assured knowing that the proprietary Defense-grade inertial navigation technology developed by our navigation scientists and engineers is amongst the highest quality in the industry.

Originally published by Geodetics 

Understanding Two Critical Principles of LiDAR Data Acquisition

If you are familiar with any type of LiDAR data acquisition, you have likely stumbled across terms including ‘boresight calibration’ and ‘strip alignment’ (or strip adjustment). These terms relate to separate strategies for adjusting and aligning adjacent strips of aerial LiDAR data. In this blog (first in a two-part series), we will address why boresight calibration and strip alignment are important considerations for UAV-LiDAR missions. In part two of this series, we will provide practical information to the LiDAR end-user which further addresses and explains the points raised in this blog.

Boresight calibration and strip alignment are particularly relevant for UAV-based LiDAR solutions such as Geo-MMS LiDAR. UAV-LiDAR data is acquired in many parallel ‘strips’, each slightly overlapping its neighbors (< 10%). The purpose of boresight calibration is to correct the minute misalignments between adjacent strips that are noticeable in the raw point cloud.

Figure 1. Flat terrain, captured across seven parallel UAV-LiDAR flight lines.

The main concerns for the LiDAR end-user are:

  • Do we need data processing procedures to adjust our point clouds for possible misalignments?
  • Is additional software needed to make these adjustments?

But first, we should address what causes these misalignments in the first place. Once the error sources are identified, we can better understand the principles behind LiDAR boresight calibration and strip alignment. When looking at the error budget of any aerial LiDAR system, we experience both systematic and random errors. With correct system calibration, systematic errors are removed and random errors are minimized.

Systematic errors result in systematic deviations of the laser footprint coordinates. Integration of Geo-MMS LiDAR or Point&Pixel payloads on your drone requires the axes of the scanning reference coordinate system and the inertial platform reference coordinate system to be parallel. When physically mounting the payload to the UAV, there is no guarantee that they will be parallel (human hands are not precision instruments). This results in what we term boresight error.

Figure 2 below summarizes the geometry of the different components in the Geo-MMS system in a simplified vector-based visual. Once the LiDAR sensor fires a pulse measuring the range vector from the laser to the scanned feature, r_LiDAR, we need three more observations to georeference the captured point:

  • The position, provided by the GPS (X_GPS) and represented in the PPK solution
  • The leverarm offset to transform the GPS position to the IMU center, marked as l_GPS→IMU
  • The boresight shift to transfer the position from the IMU center to the LiDAR sensor center, marked as l_IMU→LiDAR

Figure 2. Direct georeferencing geometry

From vector geometry, the global position of the point, X_P, can be determined by adding the sequence of four vectors:

X_P = X_GPS + l_GPS→IMU + l_IMU→LiDAR + r_LiDAR        (1)

In this geometric reconstruction, we consider only the translation between the components. Besides translations, the rotation between these components is critical. Considering the rotational influence at the different sensor frames, Equation (1) can be rewritten as:

X_P = X_GPS + R_INS · (l_GPS→IMU + l_IMU→LiDAR + R_bore · r_LiDAR)        (2)

  • R_INS represents the rotation matrix reconstructed from the roll, pitch and yaw of Geodetics’ Navigator (INS)
  • R_bore represents the rotation matrix from the IMU to the LiDAR, i.e. the LiDAR boresight angles

Now that we have covered the basic geometry, we can highlight the main variables through which direct georeferencing affects practical LiDAR usage (a numeric sketch of this chain follows the list):

  • l_GPS→IMU refers to the GPS-to-IMU leverarm. This is a known calibrated value and is pre-configured with your Geo-MMS system.
  • l_IMU→LiDAR refers to the IMU-to-LiDAR offset, or LiDAR boresight shift. This too is a known calibrated value and is pre-configured with your Geo-MMS system.
  • R_bore refers to the LiDAR boresight angles. These too are pre-configured with your Geo-MMS system.
  • X_GPS refers to the PPK solution of the GPS antenna.
  • R_INS refers to the INS-based rotation. This is determined by the INS (roll, pitch, yaw), and its accuracy is typically a function of the IMU grade, dual-GPS-based heading and filter-tuning procedures.
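To make the chain concrete, here is a minimal Python sketch of Equation (2). The function name, Euler-angle convention and every numeric value are illustrative assumptions, not Geo-MMS calibration data or software.

```python
# Minimal sketch of the direct-georeferencing chain in Equation (2).
# The function name, Euler-angle convention and every numeric value are
# illustrative assumptions, not Geo-MMS calibration data or software.
import numpy as np
from scipy.spatial.transform import Rotation

def georeference_point(x_gps, rpy_deg, lever_gps_imu, shift_imu_lidar,
                       boresight_deg, r_lidar):
    """Return X_P for one return: X_GPS + R_INS (l_GPS-IMU + l_IMU-LiDAR + R_bore r)."""
    r_ins = Rotation.from_euler("xyz", rpy_deg, degrees=True).as_matrix()         # R_INS
    r_bore = Rotation.from_euler("xyz", boresight_deg, degrees=True).as_matrix()  # R_bore
    return x_gps + r_ins @ (lever_gps_imu + shift_imu_lidar + r_bore @ r_lidar)

x_p = georeference_point(
    x_gps=np.array([100.0, 200.0, 50.0]),        # PPK antenna position (local frame, m)
    rpy_deg=[1.0, -2.0, 90.0],                   # INS roll, pitch, yaw
    lever_gps_imu=np.array([0.05, 0.00, -0.10]),
    shift_imu_lidar=np.array([0.00, 0.02, -0.05]),
    boresight_deg=[0.1, -0.05, 0.2],             # small residual boresight angles
    r_lidar=np.array([0.0, 0.0, -30.0]),         # nadir return at 30 m range
)
print(x_p)
```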

The boresight error between the LiDAR sensor and the onboard GPS/INS coordinate system is the largest systematic error source in UAV-LiDAR. The laser footprint error produced as a result of boresight error is also impacted by flight altitude (AGL) and scan angle.

The main purpose of this geometry breakdown is to highlight the five most important variables which directly impact the quality of the LiDAR data generated. Among these variables, two cannot be pre-calibrated: PPK solution and INS attitude. Several algorithms are implemented in Geodetics’ LiDARTool software to improve the PPK and INS solutions including forward-backward PPK smoothing and rounding point coordinate values to fine resolution grids. However, leftover errors can still impact point cloud reconstruction quality in harsh environments.

In part two of this series, we will more closely examine the questions posed at the beginning of this blog, pertaining to the two main concerns of the LiDAR end-user.

Originally posted by Geodetics