OpenCV: how to extend a warped perspective image once it has been warped?

Asked by bnl4lu3b on 2023-10-24 in Other

I am trying to do an "extended" warpPerspective. My idea is to pick 4 points, find a homography matrix from them, and then do a warpPerspective to get the output image "the right way" (see resistance.jpeg and warp.png below).
What I need, though, is that once those 4 points are determined, I also get the rest of the box. (You might ask: why do I take the inner square instead of the 4 corners of the box? Let's just say I can't always have the outer 4 points; this is an example of what I want to implement in a project.) (See "What I expect".)
I am using Python 3.11 and OpenCV 4.7.

Attached images:
- resistance.jpeg - original image
- warp.png - warped image
- What I expect - expectation based on the internal 4 points

import cv2
import numpy as np

src = cv2.imread('resistance.jpeg')

# Hand-picked corners of the inner square in the source image (TL, BL, BR, TR)
src_pts = np.float32([[195, 470], [144, 1406], [1151, 1430], [1130, 394]])
# Where those corners should land in the rectified image
dst_pts = np.float32([[0, 0], [0, 700], [696, 700], [696, 0]])

# Homography that maps the inner square to an axis-aligned 696x700 rectangle
mat, mask = cv2.findHomography(srcPoints=src_pts, dstPoints=dst_pts)
out = cv2.warpPerspective(src, mat, (696, 700))

cv2.imwrite('warp.png', out)

I tried changing the output-size tuple passed to cv2.warpPerspective, but if I set it to, say, (772, 1044) (roughly the size of the box in pixels), the image only grows towards the bottom-right. I think that is because the warped image (warp.png) starts at (0, 0).
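Roughly, that attempt looks like this (a sketch reusing src and mat from the snippet above; the out_bigger name and the output file are just for illustration):

# Only enlarging the output size: the extra canvas appears on the right and at
# the bottom, because the homography still maps the inner square's top-left
# corner to (0, 0).
out_bigger = cv2.warpPerspective(src, mat, (772, 1044))
cv2.imwrite('warp_bigger.png', out_bigger)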


6yt4nkrj · answer #1

The third argument you pass is the size of the output image, and you made it equal to the size of the rectified inner rectangle, so that is exactly what you get.
What we want to pass instead is the size of the rectangle obtained by rectifying, with the homography we found, the four outer vertices of the box. Take the original coordinates of the box's top-left and bottom-right corners, find their warped coordinates with cv2.perspectiveTransform(array_of_outer_points, mat), and then compute their differences in x and y: those are the width and height to plug into cv2.warpPerspective.
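A minimal sketch of that idea (the inner points are the ones from the question, the outer points are the hand-picked values from the notebook further down, reordered to match; in addition, a translation is composed with the homography so the warped outer corners do not end up at negative coordinates, which the notebook below achieves by offsetting the destination points instead):

import cv2
import numpy as np

src = cv2.imread('resistance.jpeg')

# Inner square corners (TL, BL, BR, TR) and the outer corners of the box
inner_pts = np.float32([[195, 470], [144, 1406], [1151, 1430], [1130, 394]])
rect_pts = np.float32([[0, 0], [0, 700], [696, 700], [696, 0]])
outer_pts = np.float32([[144, 261], [51, 1680], [1249, 1767], [1194, 138]])

# Homography that rectifies the inner square
mat, _ = cv2.findHomography(inner_pts, rect_pts)

# Where the outer corners land under the same homography
warped_outer = cv2.perspectiveTransform(outer_pts.reshape(-1, 1, 2), mat).reshape(-1, 2)

# Width and height of the rectified box = spread of the warped outer corners
x_min, y_min = warped_outer.min(axis=0)
x_max, y_max = warped_outer.max(axis=0)
width = int(np.ceil(x_max - x_min))
height = int(np.ceil(y_max - y_min))

# Shift so the warped box starts at (0, 0); without this, warpPerspective
# would clip everything above and to the left of the inner square.
shift = np.array([[1, 0, -x_min],
                  [0, 1, -y_min],
                  [0, 0, 1]], dtype=np.float64)

out = cv2.warpPerspective(src, shift @ mat, (width, height))
cv2.imwrite('warp_extended.png', out)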
Edited in response to the comments:
Here is what I came up with in a quick and dirty Colab notebook.

import os
import numpy as np
import cv2
from google.colab import files
from io import BytesIO
from PIL import Image
import matplotlib.pyplot as plt

# Remove any stale copy, then load the image from local disk
if os.path.exists('resistance.jpg'):
    os.unlink('resistance.jpg')
uploaded = files.upload()

resistance.jpg (image/jpeg) - 546359 bytes, last modified: n/a - 100% done
Saving resistance.jpg to resistance.jpg

# Convert it to numpy array
in_img = np.array(Image.open(BytesIO(uploaded['resistance.jpg'])))

# Using Gimp, identify the outer corners of the box, and the
# "rounded" corners of the inner square 
outer_corners = np.float32(np.array(((144,261), (1194,138), (1249,1767), (51,1680))))
inner_corners = np.float32(np.array(((207,486),(1110,414),(1137,1398),(162,1383))))

# Map the inner corners to an axis-aligned square, offset 
# enough from the image left and top side so that the outer
# corners will fit in the image after warping. I arbitrarily
# choose the size to be 800x800, but if unit scaling were
# needed I would compute the area of the inner_corners
# quadrilateral, and make up a square of the same area.
out_inner_corners = np.float32(np.array(((400,400),(1200,400),(1200,1200),(400,1200))))

# Estimate the homography mapping inner_corners to out_inner_corners
H=cv2.getPerspectiveTransform(inner_corners, out_inner_corners)
print(H)

[[ ...             ...             ...            ]
 [ 1.31087874e-01  1.07378738e+00 -1.19866159e+02]
 [ 1.21497242e-04  9.80952996e-05  1.00000000e+00]]

# See where the outer corners would be mapped into. 
out_outer_corners = cv2.perspectiveTransform(outer_corners.reshape((-1,4,2)), H)
print(out_outer_corners)

[[[ 325.53592  171.86201]
  [1274.9574   159.53275]
  [1272.9131  1464.9971 ]
  [ 314.19254 1443.8832 ]]]

# All looks good, the outer corners are all inside an image of the same shape
# as the input one, so we can warp easily:
out_img = cv2.warpPerspective(in_img, H, dsize=(2048,1536))

# Display it
plt.imshow(out_img)

# Since we know where the outer corners are mapped, we can easily crop
corners = out_outer_corners.reshape((4,2))
out_top = int(np.round(np.min(corners[:, 1])))
out_bottom = int(np.round(np.max(corners[:, 1])))
out_left = int(np.round(np.min(corners[:, 0])))
out_right = int(np.round(np.max(corners[:, 0])))

out_img_cropped = out_img[out_top:out_bottom, out_left:out_right]
plt.imshow(out_img_cropped)
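As an aside, the "unit scaling" mentioned in the comments above could be done like this (a sketch, not part of the original notebook: compute the area of the inner_corners quadrilateral and use the side of an equal-area square for the destination points):

# Side length of a square with the same area as the inner_corners quadrilateral,
# so the rectified inner square roughly preserves the original scale.
inner_area = cv2.contourArea(inner_corners.reshape(-1, 1, 2))
side = int(round(np.sqrt(inner_area)))

# Destination square of that size, offset by 400 px as before (TL, TR, BR, BL)
out_inner_corners = np.float32([[400, 400],
                                [400 + side, 400],
                                [400 + side, 400 + side],
                                [400, 400 + side]])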
