MATLAB fisheye extrinsics vs estimateWorldCameraPose

bmvo0sr5 posted on 2022-11-15 in Matlab

When calibrating a fisheye camera with MATLAB, I obtain the fisheye intrinsic parameters:

% Calibrate the camera using fisheye parameters
[cameraParams, imagesUsed, estimationErrors] = estimateFisheyeParameters(imagePoints, worldPoints, ...
    [mrows, ncols], ...
    'EstimateAlignment', true, ...
    'WorldUnits', 'millimeters');

The output of this function is a set of Scaramuzza model parameters, whose form is very different from the classic 3x3 intrinsic matrix.
From these intrinsics, I want to estimate the pose of one of my calibration patterns. So far I have found two solutions, but I do not know which one is more accurate.
First, I found that I can feed the fisheye intrinsics directly to the extrinsics function:

% Extract intrinsics parameters
intrinsics = cameraParams.Intrinsics;

% Compute Rt matrix
[R,t] = extrinsics(imagePoints,worldPoints,intrinsics);

Inside the function I can see that this method is based on a homography, but nothing in it looks specific to Scaramuzza's intrinsics: it is the same function for both fisheye and non-fisheye models. Do you know whether Scaramuzza's parameters are actually handled here?
The second solution is to use the function estimateWorldCameraPose, which relies on P3P and RANSAC under the hood. This function does not accept raw fisheye parameters. A workaround I found (https://fr.mathworks.com/matlabcentral/answers/548787-function-estimateworldcamerapose-or-extrinsics-for-fisheyeparameters-is-missing-is-it-possible?s_tid=answers_rc1-2_p2_MLT) uses the function undistortFisheyeImage to extract a 3x3 intrinsic matrix:

[J,camIntrinsics] = undistortFisheyeImage(I,intrinsics)

Then I can feed the new intrinsics to estimateWorldCameraPose.
Is this solution better? How reliable is this new intrinsic matrix?
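
For concreteness, here is a minimal sketch of that second pipeline as I understand it. undistortFisheyePoints is my substitution for undistorting the whole image (only the pattern corners are needed here), and the scale factor and variable names are my assumptions:

% Undistort the M-by-2 corners of a single pattern view and recover the
% matching virtual pinhole intrinsics in one call
[undistortedPoints, camIntrinsics] = undistortFisheyePoints(imagePoints, intrinsics, 1);

% estimateWorldCameraPose expects M-by-3 world points; pad the planar
% pattern with z = 0
worldPoints3 = [worldPoints, zeros(size(worldPoints, 1), 1)];

% P3P + RANSAC pose of the camera in the pattern's world frame
[worldOrientation, worldLocation] = estimateWorldCameraPose(...
    undistortedPoints, worldPoints3, camIntrinsics);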

w8rqjzmb #1

1. In every calibration method (ordinary lens or fisheye), the extrinsics estimation works on undistorted image points. So the MATLAB function extrinsics must also contain an undistortion step (undistorting with Scaramuzza's lens distortion model). Looking at the code of extrinsics, I found that it does handle the intrinsics separately when the model is a fisheye one:

if ~isa(cameraParams, 'fisheyeIntrinsics')
    if isa(cameraParams, 'cameraParameters')
        intrinsicParams = cameraParams;
    else
        intrinsicParams = cameraParams.CameraParameters;
    end

    intrinsics = intrinsicParams.IntrinsicMatrix;

    [rotationMatrix, translationVector] = vision.internal.calibration.extrinsicsPlanar(...
        imagePoints, worldPoints, intrinsics);
else
    % Fisheye branch (body elided in this excerpt): the distorted image
    % points are first undistorted with the Scaramuzza model, then the
    % same planar computation is applied.
end
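
To check that the fisheye branch really is equivalent, you can compare the two routes on the same view. A minimal sketch (the explicit undistortion via undistortFisheyePoints and the comparison are my additions, not part of the MathWorks code):

% Route 1: pass the fisheye intrinsics directly; extrinsics undistorts
% internally with the Scaramuzza model
[R1, t1] = extrinsics(imagePoints, worldPoints, cameraParams.Intrinsics);

% Route 2: undistort the points explicitly, then use the matching virtual
% pinhole intrinsics
[undistortedPoints, camIntrinsics] = undistortFisheyePoints(imagePoints, cameraParams.Intrinsics, 1);
[R2, t2] = extrinsics(undistortedPoints, worldPoints, camIntrinsics);

% The two poses should agree closely (up to how the two undistortions
% treat points far from the image center)
disp(norm(R1 - R2, 'fro'));
disp(norm(t1 - t2));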

2. As for your second solution: it is the only workaround, and currently the only way to achieve this.

lnvxswe2 #2

The extrinsics function accepts both fisheye and non-fisheye models. For a fisheye model the input imagePoints are the distorted points, whereas for the pinhole (non-fisheye) model the image points must already be undistorted. Internally, both paths end up in the same function, vision.internal.calibration.extrinsicsPlanar.m:

function [R,T] = extrinsicsPlanar(imagePoints, worldPoints, intrinsics)

% IntrinsicMatrix is stored transposed in MATLAB's row-vector convention,
% so A is the conventional 3x3 upper-triangular intrinsic matrix
A = intrinsics';

% Compute homography.
H = fitgeotrans(worldPoints, imagePoints, 'projective');
H = H.T';
h1 = H(:, 1);
h2 = H(:, 2);
h3 = H(:, 3);

lambda = 1 / norm(A \ h1);

% Compute rotation
r1 = A \ (lambda * h1);
r2 = A \ (lambda * h2);
r3 = cross(r1, r2);
R = [r1'; r2'; r3'];

% R may not be a true rotation matrix because of noise in the data.
% Find the best rotation matrix to approximate R using SVD.
[U, ~, V] = svd(R);
R = U * V';

% Compute translation vector.
T = (A \ (lambda * h3))';

extrinsics computes the camera extrinsics from a planar calibration pattern, so this function fits your use case.
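
One caveat when comparing the two solutions: extrinsics returns the world-to-camera transform, while estimateWorldCameraPose returns the camera pose in the world frame. The toolbox function extrinsicsToCameraPose converts between the two conventions:

% R and t satisfy (camera coords) = (world coords) * R + t
% (MATLAB's row-vector convention)
[R, t] = extrinsics(imagePoints, worldPoints, cameraParams.Intrinsics);

% Camera pose in the world frame: orientation = R', location = -t * R'
[orientation, location] = extrinsicsToCameraPose(R, t);
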
Before the fisheye pixel points reach extrinsicsPlanar, they are undistorted using a hidden method of the fisheyeIntrinsics class:

        %------------------------------------------------------------------
        % Determine the normalized 3D vector on the unit sphere
        %------------------------------------------------------------------
        function worldPoints = imageToNormalizedVector(this, imagePoints)
            %imageToNormalizedVector Determine the normalized 3D vector on
            %the unit sphere.
            %  worldPoints = imageToNormalizedVector(intrinsics,imagePoints)
            %  maps image points to the normalized 3D vector emanating from
            %  the single effective viewpoint on the unit sphere.
            %
            %  Inputs:
            %  -------
            %  intrinsics        - fisheyeIntrinsics object.
            %
            %  imagePoints       - M-by-2 matrix containing [x, y]
            %                      coordinates of image points. M is the
            %                      number of points.
            %
            %  Output:
            %  -------
            %  worldPoints       - M-by-3 matrix containing corresponding
            %                      [X,Y,Z] coordinates on the unit sphere.
            
            points = vision.internal.inputValidation.checkAndConvertPoints(...
                imagePoints, 'fisheyeIntrinsics', 'imagePoints');
            
            if isa(points, 'single')
                points = double(points);
            end
            
            center = double(this.DistortionCenter);
            stretch = double(this.StretchMatrix);
            coeffs = double(this.MappingCoeffsInternal);
            
            % Convert image points to sensor coordinates
            points(:, 1) = points(:, 1) - center(1);
            points(:, 2) = points(:, 2) - center(2);
            points = stretch \ points';
            
            rho = sqrt(points(1, :).^2 + points(2, :).^2);
            f = polyval(coeffs(end:-1:1), rho);
            
            % Note, points could be invalid if f < 0
            worldPoints = [points; f]';
            
            nw = sqrt(sum(worldPoints.^2, 2));
            nw(nw == 0) = eps;
            
            worldPoints = worldPoints ./ horzcat(nw, nw, nw);
            worldPoints = cast(worldPoints, class(imagePoints));
        end
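
Stripped of the input validation, this hidden method is Scaramuzza's back-projection. Assuming MappingCoeffsInternal holds the polynomial coefficients in ascending order (which is what the polyval(coeffs(end:-1:1), rho) call suggests), a pixel (u, v) is mapped to a ray by

\[
\begin{bmatrix} x \\ y \end{bmatrix} = S^{-1}\begin{bmatrix} u - c_x \\ v - c_y \end{bmatrix},
\qquad \rho = \sqrt{x^2 + y^2},
\qquad f(\rho) = a_0 + a_1\rho + a_2\rho^2 + \cdots + a_N\rho^N,
\]

where (c_x, c_y) is the DistortionCenter and S the 2x2 StretchMatrix; the returned point is the unit vector (x, y, f(\rho)) / \lVert (x, y, f(\rho)) \rVert. In the documented fisheye model the linear coefficient a_1 is zero. Points with f(\rho) < 0 would lie behind the single effective viewpoint, which is why the code notes they may be invalid.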

The worldPoints input of estimateWorldCameraPose expects 3-column [x y z] positions. But your pattern is planar, so every z coordinate is 0, and the function may then return a status of 1 or 2 instead of 0. So your second solution may well not be better.
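
If you do use estimateWorldCameraPose, request the status output as well, so a degenerate result is not silently accepted; the status codes below are from the function's documentation (variable names follow the sketch in the question):

[worldOrientation, worldLocation, inlierIdx, status] = estimateWorldCameraPose(...
    undistortedPoints, worldPoints3, camIntrinsics);

% status: 0 = no error, 1 = not enough input points, 2 = not enough inliers
if status ~= 0
    warning('estimateWorldCameraPose failed with status %d.', status);
end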
