
Monday, August 4, 2014

Visualization of Canal Cross-Section using Data Acquired from Teleoperated Boat

This work is to be presented at the Asian Conference on Remote Sensing (27-31 Oct 2014).

This work is part of a research project whose objective is to give its users, mainly flood management officers, a better understanding of canal and waterway profiles so that they can develop effective water diversion plans for flood preparedness.

This study directly continues our recently published work, 3D Reconstruction of Canal Profiles using Data Acquired from Teleoperated Boat, which focused on presenting 3D images of canal profiles built from the laser, depth and geolocation data acquired by the teleoperated boat. Here, we focus on presenting cross-section images of the surveyed canals.

Ideally, the GPS, IMU, 2D laser scanner and depth sounder would all be installed at the same position on the boat, preferably its centroid. Since these sensors cannot share the same position because of their physical size, lever arm offsets must be included in the computation. Instead of using the offsets directly, we derive them by asking the user to input the positions of the IMU (O_BCS), the 2D laser scanner (O_LCS) and the depth sounder (O_DCS), each specified with respect to the position of the GPS (O_GCS), as illustrated in the figure below.


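To make the derivation concrete, below is a minimal C++ sketch of how such user-supplied sensor positions could be turned into lever-arm offsets. The structure names, the choice of the IMU as the common reference, and the assumption that a simple vector difference in the boat frame is sufficient are illustrative, not taken from the actual implementation.

// Sketch: deriving lever-arm offsets from sensor positions that the user
// specifies relative to the GPS antenna (i.e. O_GCS is the common origin).
// Structure names and the boat-frame convention are illustrative assumptions.
struct Vec3 { double x, y, z; };

Vec3 operator-(const Vec3 &a, const Vec3 &b)
{
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

struct SensorPositions {
    Vec3 imu;          // O_BCS: IMU position w.r.t. the GPS antenna
    Vec3 laser;        // O_LCS: 2D laser scanner position w.r.t. the GPS antenna
    Vec3 depthSounder; // O_DCS: depth sounder position w.r.t. the GPS antenna
};

struct LeverArms {
    Vec3 imuToGps;     // GPS antenna expressed w.r.t. the IMU
    Vec3 imuToLaser;   // laser scanner expressed w.r.t. the IMU
    Vec3 imuToDepth;   // depth sounder expressed w.r.t. the IMU
};

// Because every position shares the GPS antenna as its origin, the offset
// between any two sensors is a simple vector difference in the boat frame.
LeverArms deriveLeverArms(const SensorPositions &p)
{
    LeverArms arms;
    arms.imuToGps   = Vec3{0.0, 0.0, 0.0} - p.imu;
    arms.imuToLaser = p.laser - p.imu;
    arms.imuToDepth = p.depthSounder - p.imu;
    return arms;
}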
The application to reconstruct canals in 3D and visualize their cross-sections was developed in C/C++. We used the OpenGL library to render the graphical images, and the Qt library to create a graphical user interface that allows users to interact with the application and adjust the visualization results. This section discusses the input data and the process of converting them into georeferenced 3D points, explains the data structures, and details how the cross-section images are created. The general system architecture and the point cloud coloring method were discussed in our recent paper (Tanathong et al., 2014).

The application requires GPS/IMU data in conjunction with laser data, depth data, or both. The format of each data file used in this work is presented in the figure below. Note that these three sources of data are synchronized by acquisition time.


The GPS/IMU records are kept in internal storage, defined as an array of GPSIMU_RECORDs, so that the application can immediately retrieve them to draw the boat's trajectory line, which aids the interpretation of the visualized canal scenes. In contrast, the laser and depth data are not kept as raw inputs; they are converted into 3D points and stored in an array of 3DPOINTs to be visualized as a 3D point cloud. The data structures are presented below:


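The authoritative field layout is the one shown in the figure above; the following C++ sketch only illustrates what the two storage structures might look like, with field names assumed rather than copied from the source code.

#include <vector>

// Illustrative sketch only: field names and types are assumptions; the
// authoritative layout is the one given in the figure.
struct GPSIMU_RECORD {
    double time;              // acquisition time, used to synchronize the three data sources
    double latitude;          // GPS position
    double longitude;
    double altitude;
    double roll, pitch, yaw;  // IMU attitude angles
};

struct POINT3D {              // the post names this structure 3DPOINT; renamed here
                              // only because C/C++ identifiers cannot start with a digit
    double time;              // acquisition time (keeps the array sorted chronologically)
    double X, Y, Z;           // georeferenced coordinates in the world system (GCS)
};

// Internal storage: GPS/IMU data kept as raw records for drawing the boat's
// trajectory; laser and depth data converted into georeferenced points first.
std::vector<GPSIMU_RECORD> trajectory;
std::vector<POINT3D>       pointCloud;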
The data processing procedure is summarized in the figure below. Note that this procedure inherently sorts the 3D points in the point cloud storage by acquisition time.


Before discussing the visualization of canal cross-sections, we need to understand the terms projection and unprojection. Plotting the 3D points retrieved from the 3D point storage onto the display screen presents an image of the canal profiles in 3D. Although these points are said to be presented in 3D, the display screen is flat and has only two dimensions, so they are in fact positioned as 2D points on the screen. The process that converts 3D coordinates into their corresponding 2D positions on a flat screen is known as projection (consult Shreiner et al. (2013) for more detail), while the reverse process is informally referred to as unprojection.
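In OpenGL, unprojection is typically performed with gluUnProject. The sketch below shows one common way to recover a 3D coordinate from a clicked screen position by reading the depth buffer at that pixel; it assumes the current modelview and projection matrices are the ones used to draw the point cloud, and is not the application's actual code.

#include <GL/glu.h>

// Sketch: unprojecting a mouse click (x', y') back into a 3D world coordinate.
bool unprojectClick(int clickX, int clickY, double &X, double &Y, double &Z)
{
    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    // OpenGL window coordinates start at the bottom-left corner, while mouse
    // coordinates usually start at the top-left, so the y value is flipped.
    GLdouble winX = static_cast<GLdouble>(clickX);
    GLdouble winY = static_cast<GLdouble>(viewport[3] - clickY);

    // Read the depth buffer at the clicked pixel to obtain the window z value.
    GLfloat winZ = 0.0f;
    glReadPixels(clickX, viewport[3] - clickY, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    return gluUnProject(winX, winY, winZ, modelview, projection, viewport,
                        &X, &Y, &Z) == GL_TRUE;
}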

In order to visualize a cross-section image of the canal reconstructed from the data acquired by the teleoperated boat, the user browses along the 3D canal by adjusting the viewing parameters and then marks a position of interest by clicking on the display screen. The click unprojects the 2D screen coordinate at the clicked position (x′, y′) into its corresponding 3D coordinate (X, Y, Z). This 3D coordinate is then used to retrieve its N neighboring 3D points, which are stored adjacently in the point array, and these points are used to produce the cross-section image at the selected position. The procedure to visualize cross-sections is summarized in the figure below, followed by a sketch of the neighbor retrieval step:


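The following is a possible sketch of that neighbor retrieval step, assuming the 3DPOINT layout sketched earlier; the brute-force nearest-point search and the choice to center the window of N points on the closest point are illustrative simplifications, not the paper's exact method.

#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

struct POINT3D { double time, X, Y, Z; };   // same illustrative structure as above

// Sketch: given the unprojected 3D coordinate of the click, find the closest
// stored point and return the N points stored adjacently around it. Because
// the array is sorted by acquisition time, these neighbors were scanned at
// roughly the same moment and therefore belong to the same cross-section.
std::vector<POINT3D> crossSectionPoints(const std::vector<POINT3D> &cloud,
                                        double X, double Y, double Z,
                                        std::size_t N)
{
    if (cloud.empty() || N == 0) return {};

    // Brute-force nearest-point search (a spatial index would be faster).
    std::size_t best = 0;
    double bestDist = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        const double dx = cloud[i].X - X, dy = cloud[i].Y - Y, dz = cloud[i].Z - Z;
        const double d = dx * dx + dy * dy + dz * dz;
        if (d < bestDist) { bestDist = d; best = i; }
    }

    // Take N points centered on the nearest one, clamped to the array bounds.
    const std::size_t first = (best > N / 2) ? best - N / 2 : 0;
    const std::size_t last  = std::min(cloud.size(), first + N);
    return std::vector<POINT3D>(cloud.begin() + first, cloud.begin() + last);
}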
Some of the results are illustrated in the figure below:



Publications:
Tanathong, S., Rudahl, K.T., Goldin, S.E., 2014. Towards visualizing canal cross-section using data acquired from teleoperated boat. To appear in the Proceedings of the Asian Conference on Remote Sensing (ACRS 2014), Nay Pyi Taw, Myanmar, 27-31 Oct 2014.

Thursday, July 3, 2014

3D Reconstruction of Canal Profiles using Data Acquired from Teleoperated Boat

Over the past three years, Thailand has experienced widespread flooding in several regions across the country. Flooding has always been a recurring hazard in Thailand, but due to rapid urban development and deforestation it has recently become more severe. In order to reduce the severity of the hazard and minimize the area affected by flooding, flood management personnel need to know the profiles of canals and waterways. The profile data should, at a minimum, include physical descriptions of the canal banks, the width between the two sides of the canal, the depth of the canal bed, the water level, and existing structures along the stretches of waterway. This allows them to direct the water flow in ways that reduce the amount of flood water.

In this work, we equip a teleoperated boat with a 2D laser scanner, a single-beam depth sounder, and GPS/IMU sensors, making it possible to describe the canal banks and bottom profile while the boat navigates along the waterway.


The 2D laser scanner documents the environment by sweeping its laser pulses from the left side to the right side, covering 270°. The figure below illustrates the installation of the laser scanner and the direction of travel.


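Each laser return is essentially a range measured at a known bearing within that 270° sweep, so converting a single measurement into the laser coordinate system (LCS) is a polar-to-Cartesian conversion. A small sketch, assuming the sweep runs from -135° to +135° and that the scanner's x axis points forward (both assumptions for illustration):

#include <cmath>

// Sketch: converting one laser return into the laser coordinate system (LCS).
// The axis convention and the sweep running from -135 deg to +135 deg are
// illustrative assumptions; the 2D scanner measures in a plane, so z = 0.
struct LaserPoint { double x, y, z; };

LaserPoint rangeToLCS(double range, double bearingDeg)
{
    const double kPi = 3.14159265358979323846;
    const double bearingRad = bearingDeg * kPi / 180.0;
    return { range * std::cos(bearingRad),   // forward
             range * std::sin(bearingRad),   // across the sweep (left/right)
             0.0 };
}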
The laser range and depth measurements recorded while the boat documents the canal banks and bottom profile are illustrated in the figure below.

Both the laser scanner data and the depth data are acquired in their own local coordinate systems. In order for these data to be integrated and used to produce a picture of the waterways, they must be georeferenced into the same coordinate system, namely the world coordinate system (GCS). The transformation from one coordinate system to another is essentially a series of transformations, each defined by an individual 3D rotation matrix, leading from the coordinate system in which the data are originally defined to the destination coordinate system. Here, we transform the laser coordinate system (LCS) into the boat coordinate system (BCS), then into the North-East-Down coordinate system (NED), and finally into the world coordinate system (GCS); a sketch of this chain follows the figure below.


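A compact C++ sketch of this chain of rotations is given below. The yaw-pitch-roll convention, the treatment of the final step as a translation to the boat position expressed in a local world frame, and the function names are assumptions for illustration rather than the paper's exact formulation.

#include <cmath>

// Sketch of the transformation chain LCS -> BCS -> NED -> GCS.
struct Vec3 { double x, y, z; };
struct Mat3 { double m[3][3]; };

Vec3 mul(const Mat3 &R, const Vec3 &v)
{
    return { R.m[0][0]*v.x + R.m[0][1]*v.y + R.m[0][2]*v.z,
             R.m[1][0]*v.x + R.m[1][1]*v.y + R.m[1][2]*v.z,
             R.m[2][0]*v.x + R.m[2][1]*v.y + R.m[2][2]*v.z };
}

Vec3 add(const Vec3 &a, const Vec3 &b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

// Rotation from the boat (body) frame to NED built from the IMU attitude,
// using the common yaw-pitch-roll (Z-Y-X) convention.
Mat3 bodyToNED(double roll, double pitch, double yaw)
{
    const double cr = std::cos(roll),  sr = std::sin(roll);
    const double cp = std::cos(pitch), sp = std::sin(pitch);
    const double cy = std::cos(yaw),   sy = std::sin(yaw);
    return {{{ cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr },
             { sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr },
             {   -sp,            cp*sr,            cp*cr }}};
}

// Georeference one laser point: rotate it from the laser frame into the boat
// frame (mounting rotation plus lever arm), rotate into NED with the IMU
// attitude, and finally translate by the boat position expressed in the
// world frame (assumed here to share the NED axes of a local origin).
Vec3 georeference(const Vec3 &p_lcs,
                  const Mat3 &R_lcs_to_bcs, const Vec3 &leverArm,
                  double roll, double pitch, double yaw,
                  const Vec3 &boatPositionWorld)
{
    const Vec3 p_bcs = add(mul(R_lcs_to_bcs, p_lcs), leverArm);
    const Vec3 p_ned = mul(bodyToNED(roll, pitch, yaw), p_bcs);
    return add(p_ned, boatPositionWorld);
}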
In this work, the application to reconstruct and visualize canals in 3D is developed in C/C++. To present 2D/3D graphical images, we employ the OpenGL library (www.opengl.org), which is free from licensing requirements. The graphical user interface that lets end users operate the application and adjust the visualization is implemented with the Qt library (qt-project.org) under its open source license. This is what the application looks like:


Some experimental results:


The program is now complete, but I haven't created a screen-capture video of it yet. Here is the application at its 70%-complete stage (27 May 2014).



Publications:
Tanathong, S., Rudahl, K.T., Goldin, S.E., 2014. 3D reconstruction of canal profiles using data acquired from teleoperated boat. In: Proceedings of Asia GIS, Chiang Mai, Thailand. [PDF]

Presentations:
GeoFest2014 Seminar, King Mongkut's University of Technology Thonburi [Powerpoint]