I have two cameras and a coordinate system defined by stickers placed on the floor. I am trying to use an OpenCV stereo camera setup to calibrate each camera and then find the world millimeter position of a feature that appears in both camera images. Below are images of the coordinate system and the feature as seen by each camera.
- Yellow circle sticker: at the world origin
- Green circle sticker: at world (500 mm, 0, 0)
- Red circle sticker: at world (0, 500 mm, 0)
- Very small yellow sticker: at world (151 mm, 194 mm, 0)
There is also a very small red sticker; this is the feature whose world millimeter position I am trying to find. The very small red sticker is at pixel location (1175, 440) in camera 1's image and at pixel location (1401, 298) in camera 2's image.
My basic approach is to call cv::calibrateCamera for each camera, then cv::stereoCalibrate, then cv::stereoRectify, and finally cv::triangulatePoints to get the feature's world millimeter position. I do get an answer, but it seems to be nonsense. Here is the code I am using.
#include <opencv2/opencv.hpp>

// The four real world calibration points, expressed in millimeters
// The yellow circle is the origin
// The green circle is on the x-axis at 500mm away from the origin
// The red circle is on the y-axis at 500mm away from the origin
// The fourth calibration point is a small yellow sticker at (151mm, 194mm, 0mm)
// The z-axis is up, perpendicular to the floor
std::vector<cv::Point3f> _ObjectPointsWorldMM;
_ObjectPointsWorldMM.push_back(cv::Point3f(0.0f, 0.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(500.0f, 0.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(0.0f, 500.0f, 0.0f));
_ObjectPointsWorldMM.push_back(cv::Point3f(151.0f, 194.0f, 0.0f));
std::vector<std::vector<cv::Point3f>> ObjectPointsWorldMM;
ObjectPointsWorldMM.push_back(_ObjectPointsWorldMM);
//
// Camera 1 calibrateCamera()
//
// Get the camera 1 image
cv::Mat mCamera1Image = cv::imread("Camera1.jpg", cv::IMREAD_COLOR);
// The pixel locations in the camera 1 image that correspond to the four calibration points described above
std::vector<cv::Point2f> _Camera1ImagePointsPx;
_Camera1ImagePointsPx.push_back(cv::Point2f(791.0f, 220.0f)); // Corresponds to yellow origin sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(864.0f, 643.0f)); // Corresponds to green x=500mm sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(1277.0f, 113.0f)); // Corresponds to red y=500mm sticker
_Camera1ImagePointsPx.push_back(cv::Point2f(1010.0f, 287.0f)); // Corresponds to small yellow sticker at fourth calibration point (see above)
std::vector<std::vector<cv::Point2f>> Camera1ImagePointsPx;
Camera1ImagePointsPx.push_back(_Camera1ImagePointsPx);
// Calibrate camera 1
cv::Mat mCamera1IntrinsicMatrix;
cv::Mat mCamera1DistortionCoefficients;
std::vector<cv::Mat> Camera1RotationVecs;
std::vector<cv::Mat> Camera1TranslationVecs;
const double dCamera1RMSReProjectionError = cv::calibrateCamera(
ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
Camera1ImagePointsPx, // In: The pixel locations in the camera 1 image that correspond to the four calibration points
mCamera1Image.size(), // In: The size of the camera 1 calibration image
mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // Out: The camera intrinsic matrix and distortion coefficients
Camera1RotationVecs, Camera1TranslationVecs // Out: The camera rotation and translation vectors
);
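// (Added sketch, not part of the original post: one way to sanity-check this
// single-view calibration is to reproject the world points with the recovered
// pose and confirm they land near the clicked pixels; the same check applies
// to camera 2 below.)
std::vector<cv::Point2f> Camera1ReprojectedPx;
cv::projectPoints(
    _ObjectPointsWorldMM,                                    // In: the four world calibration points
    Camera1RotationVecs[0], Camera1TranslationVecs[0],       // In: the pose recovered for this single view
    mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // In: the recovered intrinsics
    Camera1ReprojectedPx                                     // Out: should be close to _Camera1ImagePointsPx
);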
//
// Camera 2 calibrateCamera()
//
// Get the camera 2 image
cv::Mat mCamera2Image = cv::imread("Camera2.jpg", cv::IMREAD_COLOR);
// Check assumptions
assert((mCamera1Image.size() == mCamera2Image.size()));
// The pixel locations in the camera 2 image that correspond to the four calibration points described above
std::vector<cv::Point2f> _Camera2ImagePointsPx;
_Camera2ImagePointsPx.push_back(cv::Point2f(899.0f, 439.0f)); // Corresponds to yellow origin sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1472.0f, 608.0f)); // Corresponds to green x=500mm sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1101.0f, 74.0f)); // Corresponds to red y=500mm sticker
_Camera2ImagePointsPx.push_back(cv::Point2f(1136.0f, 322.0f)); // Corresponds to small yellow sticker at fourth calibration point (see above)
std::vector<std::vector<cv::Point2f>> Camera2ImagePointsPx;
Camera2ImagePointsPx.push_back(_Camera2ImagePointsPx);
// Calibrate camera 2
cv::Mat mCamera2IntrinsicMatrix;
cv::Mat mCamera2DistortionCoefficients;
std::vector<cv::Mat> Camera2RotationVecs;
std::vector<cv::Mat> Camera2TranslationVecs;
const double dCamera2RMSReProjectionError = cv::calibrateCamera(
ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
Camera2ImagePointsPx, // In: The pixel locations in the camera 2 image that correspond to the four calibration points
mCamera2Image.size(), // In: The size of the camera 2 calibration image
mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // Out: The camera intrinsic matrix and distortion coefficients
Camera2RotationVecs, Camera2TranslationVecs // Out: The camera rotation and translation vectors
);
//
// stereoCalibrate()
//
// Calibrate the stereo camera set up
cv::Mat InterCameraRotationMatrix, InterCameraTranslationMatrix;
cv::Mat InterCameraEssentialMatrix, InterCameraFundamentalMatrix;
const double dStereoCalReProjectionError = cv::stereoCalibrate(
ObjectPointsWorldMM, // In: The world millimeter locations of the four calibration points
Camera1ImagePointsPx, // In: The pixel locations in the camera 1 image that correspond to the four calibration points
Camera2ImagePointsPx, // In: The pixel locations in the camera 2 image that correspond to the four calibration points
mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // In: Camera 1's intrinsic matrix and distortion coefficients
mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // In: Camera 2's intrinsic matrix and distortion coefficients
mCamera1Image.size(), // In: The size of each image
InterCameraRotationMatrix, InterCameraTranslationMatrix, // Out: The inter-camera rotation and translation matrices
InterCameraEssentialMatrix, InterCameraFundamentalMatrix // Out: The inter-camera essential and fundamental matrices
);
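// Note (added): per the OpenCV documentation, the R and T returned by
// cv::stereoCalibrate map points from camera 1's coordinate frame into
// camera 2's frame: X2 = R * X1 + T.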
//
// stereoRectify()
//
// Compute rectification transforms for each head of the calibrated stereo camera
cv::Mat Camera1RectificationTransform, Camera2RectificationTransform;
cv::Mat Camera1ProjectionMatrix, Camera2ProjectionMatrix;
cv::Mat DisparityToDepthMappingMatrix;
cv::stereoRectify(
mCamera1IntrinsicMatrix, mCamera1DistortionCoefficients, // In: Camera 1's intrinsic matrix and distortion coefficients
mCamera2IntrinsicMatrix, mCamera2DistortionCoefficients, // In: Camera 2's intrinsic matrix and distortion coefficients
mCamera1Image.size(), // In: The size of each image
InterCameraRotationMatrix, InterCameraTranslationMatrix, // In: The inter-camera rotation and translation matrices
Camera1RectificationTransform, Camera2RectificationTransform, // Out: The 3x3 rectification transforms for each camera
Camera1ProjectionMatrix, Camera2ProjectionMatrix, // Out: The 3x4 projection matrices for each camera
DisparityToDepthMappingMatrix // Out: The 4x4 disparity-to-depth mapping matrix
);
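// Note (added): the 3x4 projection matrices returned by cv::stereoRectify are
// expressed in the *rectified* camera coordinate frames, not in the world frame
// defined by the floor stickers, and they assume rectified (undistorted) pixel
// coordinates as input.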
//
// triangulatePoints()
//
// The pixel location of each feature to find in the camera 1 image
std::vector<cv::Point2f> FeaturePointsCamera1;
FeaturePointsCamera1.push_back(cv::Point2f(1175.0f, 440.0f)); // The pixel location of the small red sticker
// The pixel location of each feature to find in the camera 2 image
std::vector<cv::Point2f> FeaturePointsCamera2;
FeaturePointsCamera2.push_back(cv::Point2f(1401.0f, 298.0f)); // The pixel location of the small red sticker
// Perform triangulation to find the feature(s)
cv::Mat FeatureLocationHomog;
cv::triangulatePoints(
Camera1ProjectionMatrix, Camera2ProjectionMatrix, // In: The 3x4 projection matrices for each camera
FeaturePointsCamera1, FeaturePointsCamera2, // In: The feature pixel points for each camera
FeatureLocationHomog // Out: The reconstructed feature locations in homogeneous coordinates
);
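// Note (added): cv::triangulatePoints returns a 4xN single-channel array of
// homogeneous coordinates, one column per triangulated feature.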
// Check assumptions
assert((FeatureLocationHomog.cols == static_cast<int>(FeaturePointsCamera1.size())));
The homogeneous coordinates I get back are:
[-0.60382962, -0.0076272688, 0.79707688, 0.00036873418]
My understanding of homogeneous coordinates is that to convert [a, b, c, d] to inhomogeneous coordinates you take [a/d, b/d, c/d], so my homogeneous vector becomes:
[-1637.574, -20.685, 2161.657]
That is clearly not right. Can anyone point me in the right direction?
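As a side note, this hand division is exactly what cv::convertPointsFromHomogeneous computes; a minimal sketch (not part of the original post), reusing the variables above and transposing because triangulatePoints returns a 4xN matrix:

cv::Mat FeatureLocationEuclidean;
cv::convertPointsFromHomogeneous(FeatureLocationHomog.t(), FeatureLocationEuclidean);
// FeatureLocationEuclidean now holds one 3D point (x/w, y/w, z/w) per feature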
2 Answers
Answer 1 (u4vypkhs):
I no longer believe the code below is correct. I am still trying to figure this out, and I will post the final, correct answer once I have it. I hope my original code below has not misled anyone.
I just found the problem. Instead of taking each camera's projection matrix from stereoRectify, I should do this:
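A minimal sketch of the construction being described, assuming it builds each camera's 3x4 projection matrix directly from the calibrateCamera pose as P = K * [R | t]; the names below (for example Camera1WorldProjectionMatrix) are mine:

cv::Mat mCamera1RotationMatrix;
cv::Rodrigues(Camera1RotationVecs[0], mCamera1RotationMatrix); // 3x1 rotation vector -> 3x3 rotation matrix
cv::Mat mCamera1RT;
cv::hconcat(mCamera1RotationMatrix, Camera1TranslationVecs[0], mCamera1RT); // [R | t], a 3x4 matrix
cv::Mat Camera1WorldProjectionMatrix = mCamera1IntrinsicMatrix * mCamera1RT; // P1 = K1 * [R1 | t1]
cv::Mat mCamera2RotationMatrix;
cv::Rodrigues(Camera2RotationVecs[0], mCamera2RotationMatrix);
cv::Mat mCamera2RT;
cv::hconcat(mCamera2RotationMatrix, Camera2TranslationVecs[0], mCamera2RT);
cv::Mat Camera2WorldProjectionMatrix = mCamera2IntrinsicMatrix * mCamera2RT; // P2 = K2 * [R2 | t2]

Because the calibration points were given in world millimeters, triangulating with these matrices yields homogeneous coordinates in the world frame, so the result comes out in millimeters after dividing by w.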
Then I can use each of these projection matrices in the cv::triangulatePoints call just as in my original post. Hopefully this helps someone else.
Answer 2 (rsl1atfo):
OK, I believe I have a better answer than my original one. Hopefully it will be useful to someone.