Visual odometry implementation

Hi JdeRobot community,
Here is my implementation of the 2D visual odometry exercise.
YouTube link to the video:
I did the following:

  • Read depth and color images.
  • Detected FAST corners and tracked them with the Lucas-Kanade optical flow algorithm, keeping only good and relevant matches.
  • Created a 3D point cloud for the previous and current images using their depth images (handled NaN values by excluding them when computing the 3D point cloud).
  • Selected 3D point-cloud inliers for both the previous and current images using a consistency matrix M:
    M(i,j) = 1 if the pairwise Euclidean distance between points i and j is preserved between the two clouds, i.e. the two distances differ by less than a very small threshold, say 0.0001.
  • Extracted the pose, i.e. the rotation and translation matrices, from the 3D-3D correspondences using least-squares fitting of the two 3D point sets. The least-squares algorithm uses SVD.
  • Concatenated the rotations and translations to obtain the predicted path.
    Some useful paper links and blogs I used:

Hi @DivanshuG05,

Nice solution! Like any other odometry approach it suffers from error accumulation, but the general shape of the trajectory is well estimated. Congrats!

By using the depth information you avoid the scale-uncertainty and scale-drift problems common in color-only approaches.

Well done!

Thanks @jmplaza,
Yes, the scale ambiguity is a big problem if we don't use depth images. Can you tell me how we can resolve that? For instance, if we know that a point in 3D space is actually located at some coordinate, say (x, y, z), and our solution comes out as, say, (x', y', z'), do we compute the ratio z'/z and then scale our solution by that ratio?
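The ratio idea in the question, as a toy sketch (the depth values and trajectory here are made up for illustration; recovering a single reliable reference depth is the hard part in practice):

```python
import numpy as np

# One landmark whose true depth is known, versus its depth in the
# up-to-scale monocular solution (both values are hypothetical).
z_true = 2.0   # known real depth of a reference point
z_est = 0.5    # depth of the same point in the estimated solution
scale = z_true / z_est  # factor that maps the solution to metric units

# An up-to-scale trajectory (hypothetical positions), rescaled to metric.
trajectory = np.array([[0.0, 0.0, 0.0],
                       [0.1, 0.0, 0.02],
                       [0.2, 0.01, 0.05]])
trajectory_metric = trajectory * scale
```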

Yes, knowledge of the real size or position of a given object in 3D can be used to adjust the scale of the visual reference system, that is, to 'calibrate' it. But it is not that easy, anyway. Take a look, for instance, at the paper "Dense, Auto-Calibrating Visual Odometry from a Downward-Looking Camera".