Problem doing Visual odom exercise

I’m trying to do the visual odometry exercise, but while reading the depth image data I’m only getting NaN values, although I’m able to read the RGB data perfectly. Because of the NaN values in the depth image data I’m not able to build the 3D point cloud.
The dataset link for downloading the rosbag file wasn’t working, so I downloaded the data from the official TUM website. Any suggestions on how to read the depth images properly would be appreciated.
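In case it helps with debugging: the TUM RGB-D downloads store depth as 16-bit PNGs where metres = raw value / 5000 and 0 marks a missing measurement, while the depth topic in a ROS bag is typically 32FC1 with depth already in metres and NaN for invalid pixels. A minimal sketch of normalising both layouts (the function name is mine, not from the exercise):

```python
import numpy as np

def depth_to_meters(depth_raw):
    """Convert a depth frame to metres and flag invalid pixels.

    Handles the two common TUM RGB-D layouts:
    - 16-bit PNG: metres = raw / 5000, raw == 0 means "no measurement"
    - 32FC1 ROS image: already in metres, NaN means "no measurement"
    """
    if depth_raw.dtype == np.uint16:
        depth_m = depth_raw.astype(np.float32) / 5000.0
        depth_m[depth_raw == 0] = np.nan  # unify: invalid -> NaN
    else:
        depth_m = depth_raw.astype(np.float32)
    valid = np.isfinite(depth_m)          # mask of usable pixels
    return depth_m, valid

# Tiny synthetic 16-bit frame with one missing pixel
raw = np.array([[5000, 0], [10000, 2500]], dtype=np.uint16)
depth, valid = depth_to_meters(raw)
# depth[0, 0] -> 1.0 m, depth[1, 0] -> 2.0 m, depth[0, 1] -> NaN
```

If every pixel comes out NaN, it is worth checking the image encoding requested from cv_bridge (e.g. passthrough vs. 32FC1) before suspecting the data itself.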
Here is a screenshot of the same.
Thanks in advance !!

I tried saving the depth image and, to my surprise, the saved file actually looks like a depth image.
But I don’t understand why printing the depth array yields NaN values, or how OpenCV is able to turn those NaN values into a depth image. Also, in self.set_predicted_pose, why are the ‘Z’ values not passed as arguments? Is it that ‘Z’ stays constant throughout the motion?
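One possible explanation (an assumption on my part, not something from the exercise code): when a float image is written to an 8-bit file, NaN pixels get cast to arbitrary byte values, so the saved picture can still look plausible even though the array is full of NaNs. For building the point cloud it is safer to back-project only the finite pixels. A sketch, using the standard pinhole model; the intrinsics fx, fy, cx, cy below are placeholders, not the exercise’s actual calibration:

```python
import numpy as np

def backproject(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (metres, NaN = invalid) to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth_m)          # skip NaN pixels up front
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx          # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Tiny synthetic frame: one valid pixel, one NaN
depth = np.array([[2.0, np.nan]], dtype=np.float32)
cloud = backproject(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
# cloud contains a single 3D point; the NaN pixel is dropped
```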
Any help on why this is happening would be appreciated.

Here is a screenshot of the depth image!

Thanks in advance !!

Now I have a problem getting the correct rotation matrix from the 3D-3D correspondences, which I’m unable to figure out. I’m trying to use least-squares fitting of 3D point sets as described in this paper, but I haven’t been able to get correct results so far. Any help would be highly appreciated.
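Assuming the paper is the classic SVD-based least-squares fitting of two 3-D point sets (Arun et al.), two frequent causes of a wrong rotation are forgetting to subtract the centroids before forming the cross-covariance matrix, and not handling the reflection case where det(R) = -1. A minimal sketch of that method (function name is mine):

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rigid transform with Q ~ R @ P + t.

    P, Q: 3xN arrays of corresponding 3D points.
    SVD-based method of Arun et al. (1987)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of the *centred* point sets
    H = (P - p_mean) @ (Q - q_mean).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Sanity check: recover a known rotation about Z plus a translation
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
P = np.random.default_rng(0).standard_normal((3, 10))
Q = Rz @ P + np.array([[1.0], [2.0], [3.0]])
R, t = fit_rigid_transform(P, Q)
# R should match Rz and t should match (1, 2, 3) up to floating-point error
```

If the recovered rotation is still wrong on real data, the issue is often in the correspondences themselves (outliers, or points matched in the wrong order) rather than in the fitting step.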