I am trying to estimate the depth of a point from a disparity map. First I performed stereo calibration and rectified the images, then computed the disparity map using OpenCV's StereoSGBM. Since disparity is the offset between two corresponding points in the left and right images of the stereo pair:
x_right = x_left + disparity
From the calibration I obtained the extrinsic and intrinsic parameters, and from those I computed the baseline and the focal length.
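For reference, the standard triangulation relation for a rectified pair, with the focal length f in pixels and the baseline B in cm (so the depth Z comes out in cm). Note that OpenCV's left matcher defines disparity as d = x_left - x_right, which is positive for points in front of the cameras:

Z = f * B / d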
import cv2 as cv
import numpy as np

## Read images
img_left = cv.imread('images/testLeft/testL0.png')
img_right = cv.imread('images/testRight/testR0.png')
# Grayscale Images
frame_left = cv.cvtColor(img_left,cv.COLOR_BGR2GRAY)
frame_right = cv.cvtColor(img_right,cv.COLOR_BGR2GRAY)
# Undistort and rectify images (positional args after the interpolation flag land in
# remap's dst slot, so pass the border options by keyword)
frame_left_rect = cv.remap(frame_left, stereoMapL_x, stereoMapL_y, cv.INTER_LANCZOS4,
                           borderMode=cv.BORDER_CONSTANT, borderValue=0)
frame_right_rect = cv.remap(frame_right, stereoMapR_x, stereoMapR_y, cv.INTER_LANCZOS4,
                            borderMode=cv.BORDER_CONSTANT, borderValue=0)
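# Where the maps above would come from -- a minimal sketch, assuming mtxL/distL,
# mtxR/distR, R and T were returned by cv.stereoCalibrate (illustrative names,
# not from my actual code):
#   rectL, rectR, projL, projR, Q, roiL, roiR = cv.stereoRectify(
#       mtxL, distL, mtxR, distR, frame_left.shape[::-1], R, T)
#   stereoMapL_x, stereoMapL_y = cv.initUndistortRectifyMap(
#       mtxL, distL, rectL, projL, frame_left.shape[::-1], cv.CV_16SC2)
#   stereoMapR_x, stereoMapR_y = cv.initUndistortRectifyMap(
#       mtxR, distR, rectR, projR, frame_left.shape[::-1], cv.CV_16SC2)
# The Q matrix from cv.stereoRectify is the one cv.reprojectImageTo3D needs later.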
# Create a StereoSGBM matcher (note: SGBM, not StereoBM)
block_size = 5
left_matcher = cv.StereoSGBM_create(
    minDisparity=-1,
    numDisparities=16 * 3,          # must be a multiple of 16
    blockSize=block_size,
    P1=8 * 1 * block_size ** 2,     # 1 image channel: inputs are grayscale
    P2=32 * 1 * block_size ** 2,
    disp12MaxDiff=1,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=32,
    mode=cv.STEREO_SGBM_MODE_SGBM_3WAY,
)
#===========================================================================
# Compute Disparity Map
#===========================================================================
disparity = left_matcher.compute(frame_left_rect, frame_right_rect)
# SGBM returns fixed-point disparities scaled by 16; convert to float pixel units
disparity = disparity.astype(np.float32) / 16.0
# Normalise to 0-255 for display only; a raw float cast to uint8 wraps around
disp_vis = cv.normalize(disparity, None, 0, 255, cv.NORM_MINMAX).astype(np.uint8)
disp_test = cv.applyColorMap(disp_vis, cv.COLORMAP_PLASMA)
cv.imshow("Disparity Map", disp_test)
#==========================================================================
# Depth Map
#==========================================================================
# Focal length in pixels, baseline in cm  ->  depth map in cm
# Mask out non-positive disparities before dividing to avoid inf/negative depth
depth_map = np.full(disparity.shape, np.inf, dtype=np.float32)
valid = disparity > 0
depth_map[valid] = focal_length * Baseline / disparity[valid]
My problem is that the depth values are wrong. Can anyone help me with how to get depth from the disparity map? I could probably use reprojectImageTo3D, but I think something is wrong with my disparity map.
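For what it's worth, this is how I would call it, assuming Q is the 4x4 reprojection matrix returned by cv.stereoRectify during calibration (I have not verified my Q yet):

points_3d = cv.reprojectImageTo3D(disparity, Q)  # disparity already divided by 16
valid = disparity > 0                            # keep only matched pixels
depth_z = points_3d[:, :, 2]                     # Z, in the same units as the baseline in Q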
1 Answer
Check whether your camera parameters fx, fy, Cx, Cy are consistent with the spatial dimensions of the image.
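A quick sanity check along those lines, as a minimal sketch; mtx_left is an illustrative name for the 3x3 camera matrix from your calibration:

h, w = frame_left.shape[:2]
fx, fy = mtx_left[0, 0], mtx_left[1, 1]   # focal lengths in pixels
cx, cy = mtx_left[0, 2], mtx_left[1, 2]   # principal point
print(f"image {w}x{h}  fx={fx:.1f} fy={fy:.1f} cx={cx:.1f} cy={cy:.1f}")
# cx should be roughly w/2 and cy roughly h/2. If you calibrated at a different
# resolution than the images you feed to the matcher, fx, fy, cx, cy must be
# rescaled by the same factor.

If the focal length was calibrated at a different resolution, the depth from Z = f * B / d will be off by exactly that scale factor.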