In this project, I am responsible for the image matching component that will be integrated into the car navigation system. The image matching is based on the Kanade-Lucas-Tomasi (KLT) algorithm, which is well known for its computational efficiency and widely used in real-time applications.
Although KLT is a promising approach to the real-time acquisition of tie-points, extracting tie-points from urban traffic scenes captured by a moving camera is challenging. To serve as input to the bundle adjustment process, tie-points must be acquired only from stationary objects, never from moving ones. When the camera (observer) is at a fixed position, moving objects can be distinguished from stationary objects by the direction and magnitude of their optical flow vectors. When the camera itself moves, however, it induces optical flow on stationary objects as well, making the two difficult to separate. The problem is further complicated in road scenes, which typically contain several moving objects. The task of image matching is therefore not only to produce tie-points but also to discard those associated with moving objects.
This study presents an image matching system based on the KLT algorithm. To simplify the aforementioned problem, built-in sensory data are employed. The sensors provide the translational velocity and angular velocity of the camera (more precisely, of the vehicle carrying the camera). These data can be used to derive the position and attitude parameters of the camera, referred to hereafter as the preliminary exterior orientation (EO).
We develop our image matching system based on the KLT algorithm; its procedure is presented below. Tracking runs continuously, but a set of tie-points is output once per second. Since KLT works only when the displacement between frames is small, we track across a number of intermediate frames each second and return a single set of tie-points to AT. Basic outlier removal consists of, in order: (1) a cross-correlation coefficient check, (2) a KLT tracking cross-check, and (3) an optical flow evaluation. For moving-object removal, we use the initial EOs to project the tie-points and identify moving objects from the discrepancy between the tracked and projected positions.
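Steps (2) and (3) of the outlier removal can be sketched as follows. This is a minimal Python illustration of the idea only; the actual system is written in C/C++ on OpenCV, and the thresholds and function names here are my own assumptions, not the production code.

```python
import math

def cross_check(fwd, bwd, tol=0.5):
    """KLT tracking cross-check: keep a tie-point only if tracking its
    match back from frame t+1 to frame t lands close (within tol pixels)
    to the original position. fwd is a list of (p0, p1) pairs; bwd maps
    each p1 back to the position found by reverse tracking."""
    kept = []
    for p0, p1 in fwd:
        p0_back = bwd.get(p1)
        if p0_back is None:
            continue
        if math.hypot(p0[0] - p0_back[0], p0[1] - p0_back[1]) <= tol:
            kept.append((p0, p1))
    return kept

def flow_outliers(pairs, k=2.0):
    """Optical flow evaluation: discard vectors whose magnitude deviates
    strongly from the median flow magnitude of the frame pair."""
    mags = sorted(math.hypot(p1[0] - p0[0], p1[1] - p0[1]) for p0, p1 in pairs)
    med = mags[len(mags) // 2]
    return [(p0, p1) for p0, p1 in pairs
            if abs(math.hypot(p1[0] - p0[0], p1[1] - p0[1]) - med) <= k * max(med, 1.0)]
```

The same discrepancy idea carries over to moving-object removal, with the projected position (from the initial EOs) taking the place of the reverse-tracked one.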
The procedure of the proposed image matching for a car navigation system.
The image matching software is developed in C/C++ using the OpenCV library (version 1.1).
The tie-point projection result is presented below:
Some of the image matching results are presented below:
Publication:
Choi, K., Tanathong, S., Kim, H., Lee, I., 2013. Realtime image matching for vision based car navigation with built-in sensory data. Proceedings of ISPRS Annuals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Antalya, Turkey. [PDF]
Thursday, July 3, 2014
Fast Image Matching for Real-time Georeferencing using Exterior Orientation Observed from GPS/INS
Advances in science and technology provide new capabilities to improve human security. One example is the area of disaster response, which has become far more effective through the application of remote sensing and other computing technologies. However, traditional photogrammetric georeferencing techniques, which rely on manual control-point selection, are too slow to meet this challenge as both the number and severity of disasters increase worldwide. To use imagery effectively for disaster response, we need so-called real-time georeferencing: the ability to obtain the accurate exterior orientation (EO) of photographs or images in real time.
In this study, we present a fast, automated image matching system based on the Kanade-Lucas-Tomasi (KLT) algorithm that, operated in conjunction with real-time aerial triangulation (AT), allows the EOs to be determined immediately after image acquisition.
Although KLT shows a promising ability to deliver tie-points to end applications in real time, the algorithm is vulnerable when adjacent images undergo large displacement or are captured during a sharp turn of the acquisition platform (in our research, an Unmanned Aerial Vehicle, or UAV), as illustrated in the figure below:
This study overcomes these limitations by deriving a good initial approximation for the KLT problem from the EOs observed through GPS/INS, which allows the algorithm to converge to the final solution more quickly.
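The initial approximation can be derived with the standard collinearity equations: back-project the tracked point from the previous image onto the terrain, then project the resulting ground point into the next image using its GPS/INS-observed EO. The Python sketch below illustrates this under assumptions of my own (flat terrain at a known mean height, photo coordinates centered on the principal point, hypothetical function names); it is not the paper's implementation.

```python
import math

def rot_matrix(omega, phi, kappa):
    """Object-to-image rotation matrix M = R3(kappa) R2(phi) R1(omega)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [sp,       -so * cp,                co * cp],
    ]

def project(eo, ground, f):
    """Collinearity projection of a ground point into photo coordinates."""
    X0, Y0, Z0, omega, phi, kappa = eo
    m = rot_matrix(omega, phi, kappa)
    dX, dY, dZ = ground[0] - X0, ground[1] - Y0, ground[2] - Z0
    u = m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ
    v = m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ
    w = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    return (-f * u / w, -f * v / w)

def initial_guess(p_prev, eo_prev, eo_next, f, mean_height=0.0):
    """Predict the tracked point's location in the next frame: intersect
    its image ray with the assumed ground plane, then re-project that
    ground point with the next frame's GPS/INS-observed EO."""
    X0, Y0, Z0, omega, phi, kappa = eo_prev
    m = rot_matrix(omega, phi, kappa)
    x, y = p_prev
    # ray direction in object space: M^T applied to (x, y, -f)
    dx = m[0][0] * x + m[1][0] * y + m[2][0] * (-f)
    dy = m[0][1] * x + m[1][1] * y + m[2][1] * (-f)
    dz = m[0][2] * x + m[1][2] * y + m[2][2] * (-f)
    s = (mean_height - Z0) / dz
    ground = (X0 + s * dx, Y0 + s * dy, mean_height)
    return project(eo_next, ground, f)
```

Feeding this prediction to the tracker as its starting point is what lets KLT cope with the large inter-frame displacements described above.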
Integrating the proposed approach into the pyramidal tracking model, the implementation procedure (derived from the pyramidal implementation of Bouguet, 2000) is presented in the figure below. The derivation in the figure is based on the translation model, Eq. (3.2).
The related equations are summarized below:
In this research, we also present a mathematical solution for determining the number of depth levels of the image pyramid, which previously had to be defined manually by operational personnel. As a result, our system can function automatically without human intervention. For further details, please have a look at this document.
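The exact formula is in the linked document, but the reasoning can be sketched: each pyramid level halves the image, and plain KLT converges only while the residual displacement stays within roughly the integration window. So for an expected displacement d (predictable here from GPS/INS) and window radius w, the required depth is the smallest L with d / 2^L ≤ w. A hypothetical Python sketch of that reasoning (the cap of 8 levels is my assumption):

```python
import math

def pyramid_levels(expected_disp, window_radius, max_levels=8):
    """Smallest pyramid depth L such that the expected displacement,
    reduced by 2**L at the coarsest level, falls within the tracker's
    convergence range (taken here as the integration window radius)."""
    if expected_disp <= window_radius:
        return 0  # the base level alone can absorb the motion
    levels = math.ceil(math.log2(expected_disp / window_radius))
    return min(levels, max_levels)
```

For a 40-pixel expected displacement and a 7-pixel window this yields 3 levels, since 40 / 2^3 = 5 ≤ 7 while 40 / 2^2 = 10 > 7.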
In addition, the work reported here enhances the KLT feature detection to obtain a larger number of features and introduces geometric constraints to improve the quality of tie-points, leading to greater success in AT.
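One common geometric constraint of this kind is an epipolar check: with the relative rotation R and translation t between two frames available from the GPS/INS-observed EOs, candidate tie-points can be filtered by the residual of the epipolar constraint x₂ᵀ [t]ₓ R x₁ ≈ 0. The sketch below is an assumption about the approach, not the paper's exact formulation, and works in normalized image coordinates.

```python
def essential(R, t):
    """E = [t]x R for a known relative rotation R (3x3 list of lists)
    and translation t (3-tuple) between two frames."""
    tx, ty, tz = t
    Tx = [[0.0, -tz, ty], [tz, 0.0, -tx], [-ty, tx, 0.0]]
    return [[sum(Tx[i][k] * R[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def epipolar_residual(E, x1, x2):
    """|x2^T E x1| for homogeneous normalized coordinates x = (x, y, 1)."""
    Ex1 = [sum(E[i][k] * x1[k] for k in range(3)) for i in range(3)]
    return abs(sum(x2[i] * Ex1[i] for i in range(3)))

def filter_pairs(E, pairs, tol=1e-3):
    """Keep only tie-point pairs consistent with the epipolar geometry."""
    return [(x1, x2) for x1, x2 in pairs if epipolar_residual(E, x1, x2) <= tol]
```

Because E comes from the observed EOs rather than from the matches themselves, this filter does not require a RANSAC-style model estimation step.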
The proposed image matching system used for the experimental testing is developed in C/C++ with Microsoft Visual Studio 2008. The KLT algorithm adopted for this study is implemented by modifying the OpenCV library, version 1.1 (OpenCV, 2012); most of the auxiliary image-handling functions are also based on OpenCV. The implementation of the image matching for real-time georeferencing is illustrated in the figure below.
The goal of the experiment is to demonstrate the improvement in accuracy of the directly observed EOs (EOS1, EOS2 and EOS3) after they are refined through a bundle adjustment process using tie-points produced by the proposed image matching. In addition, the experiment measures the accuracy of the adjusted EOs (EOA1, EOA2 and EOA3) when a larger number of tie-points is involved in the AT computation. The accuracy of the initial EOs is listed in the tables below:
The experimental results demonstrate that the image matching system, in conjunction with AT, can refine the accuracy of all the initial EOs (EOS1, EOS2 and EOS3). The adjusted EOs show a lower RMS discrepancy against the true EOs than the directly observed EOs. Moreover, accuracy improves further as the number of tie-points for AT increases. For example, the RMS of the initial EOS3 is 2.122 m, 1.670 m and 1.725 m for the three position parameters and 1.780 deg, 1.790 deg and 2.079 deg for the three attitude parameters. The AT process (std_ip = 15) improves the accuracy of the position parameters by 25% and the attitude parameters by 58% when the maximum number of tie-points is set to 3×3×3 per stereo image. Increasing the number of tie-points to 3×3×4, the adjusted EOA3 improves by 37% and 69% for position and attitude, respectively, compared with the directly observed EOs. The improvements reach 40% and 70% when the number of tie-points is set to 3×3×5. The experimental results for this dataset are summarized in the figure below:
Publications:
1. Tanathong, S., Lee, I., 2014. Translation-based KLT tracker under severe camera rotation using GPS/INS data. IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 1, pp. 64-68. [LINK][Sourcecode]
2. Tanathong, S., Lee, I., In Press. Using GPS/INS data to enhance image matching for real-time aerial triangulation. Computers & Geosciences. [LINK]
3. Tanathong, S., Lee, I., Submitted for publication. Accuracy assessment of the rotational invariant KLT tracker and its application to real-time georeferencing. Journal of Applied Remote Sensing. (Revision in progress)
4. Tanathong, S., Lee, I., 2011. A development of a fast and automated image matching based on KLT tracker for real-time image georeferencing. Proceedings of ISRS, Yeosu, Korea. (Student Paper Award)
5. Tanathong, S., Lee, I., 2011. An automated real-time image georeferencing system. Proceedings of IPCV, Las Vegas, USA.
6. Tanathong, S., Lee, I., 2010. Speeding up the KLT tracker for real-time image georeferencing using GPS/INS data. Korean Journal of Remote Sensing, vol. 26, no. 6, pp. 629-644. [LINK][PDF]
7. Tanathong, S., Lee, I., 2010. Towards improving the KLT tracker for real-time image georeferencing using GPS/INS data. Proceedings of 16th Korea-Japan Joint Workshop on Frontiers of Computer Vision, Hiroshima, Japan.
8. Tanathong, S., Lee, I., 2009. The improvement of KLT for real-time feature tracking from UAV image sequence. Proceedings of ACRS, Beijing, China.