OpenCV: how to measure the average thickness of a labeled segmented image

yftpprvb asked on 2022-11-15 in Other

I have an image on which I have done some preprocessing. Below I show my preprocessing:

import cv2
import numpy as np

img = cv2.imread("...my_drive...\\image_69.tif", 0)

median = cv2.medianBlur(img, 13)
ret, th = cv2.threshold(median, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
kernel = np.ones((3,15), np.uint8)
closing1 = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel, iterations=2)
kernel = np.ones((1,31), np.uint8)
closing2 = cv2.morphologyEx(closing1, cv2.MORPH_CLOSE, kernel)

kernel = np.ones((1,13), np.uint8)
opening1 = cv2.morphologyEx(closing2, cv2.MORPH_OPEN, kernel, iterations=2)

So basically I used thresholding, closing, and opening, and the result looks like this:

Note that when I use type(opening1) I get numpy.ndarray, so the image at this step is a numpy array of size 1021 x 1024.
Then I label my image:

from skimage import measure

label_image = measure.label(opening1, connectivity=opening1.ndim)
props = measure.regionprops_table(label_image, properties=['label', "area", "coords"])

The result looks like this:

Note that when I use type(label_image) I again get numpy.ndarray, so the image at this step is a numpy array of size 1021 x 1024.
As you can see, the current image has 6 labels. Some of these labels are short, small patches, so I try to keep the top 2 labels based on area:

from skimage.measure import regionprops

slc = label_image
rps = regionprops(slc)
areas = [r.area for r in rps]

# indices of the regions, sorted by area (largest first)
idx = np.argsort(areas)[::-1]
new_slc = np.zeros_like(slc)

for i in idx[:2]:
    new_slc[tuple(rps[i].coords.T)] = i + 1

Now the result looks like this:

It looks like I successfully kept the top 2 regions (note that by changing idx[:2] you can select either the thickest white layer or the thinnest one). Now:

**What I want to do:** I want to find the average thickness of these two regions.

Also, note that I know that each of my pixels is 314 nanometers.
Can anyone here give me an idea of how to do this?
Original photo: below I show a low-quality version of my original image, so you can better understand why I did all that preprocessing.

You can also access the original photo here: https://www.mediafire.com/file/20h66aq83edy1h7/img.7z/file


5fjcxozz1#

Here is one way to do that in Python/OpenCV.

  • Read the input
  • Convert to grayscale
  • Threshold to binary
  • Get the contours and filter on area so that we have only the two main lines
  • Sort the contours by area
  • Select the first (smaller and thinner) contour
  • Draw it white filled on a black background
  • Get its skeleton
  • Get the points of the skeleton
  • Fit a line to the points and get the rotation angle of the skeleton
  • Loop over each of the two contours: draw it white filled on a black background, rotate it to horizontal, get the vertical thickness of the line as the average of the per-column np.count_nonzero(), and print the value
  • Save the intermediate images

Input:

import cv2
import numpy as np
import skimage.morphology
import skimage.transform
import math

# read image
img = cv2.imread('lines.jpg')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)[1]

# get contours
new_contours = []
img2 = np.zeros_like(thresh, dtype=np.uint8)
contour_img = thresh.copy()
contour_img = cv2.merge([contour_img,contour_img,contour_img])
contours = cv2.findContours(thresh , cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 1000:
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 1)
        cv2.drawContours(img2, [cntr], 0, (255), -1)
        new_contours.append(cntr)

# sort contours by area
cnts_sort = sorted(new_contours, key=lambda x: cv2.contourArea(x), reverse=False)

# select first (smaller) sorted contour
first_contour = cnts_sort[0]
contour_first_img = np.zeros_like(thresh, dtype=np.uint8)
cv2.drawContours(contour_first_img, [first_contour], 0, (255), -1)

# thin smaller contour
thresh1 = (contour_first_img/255).astype(np.float64)
skeleton = skimage.morphology.skeletonize(thresh1)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)

# get skeleton points
pts = np.column_stack(np.where(skeleton.transpose()==255))

# fit line to pts
(vx,vy,x,y) = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)
#print(vx,vy,x,y)
x_axis = np.array([1, 0])    # unit vector in the same direction as the x axis
line_direction = np.array([vx, vy])    # unit vector in the same direction as your line
dot_product = np.dot(x_axis, line_direction)
[angle_line] = (180/math.pi)*np.arccos(dot_product)
print("angle:", angle_line)

# loop over each sorted contour
# draw contour filled on black background
# rotate
# get mean thickness from np.count_nonzero
black = np.zeros_like(thresh, dtype=np.uint8)
i = 1
for cnt in cnts_sort:
    cnt_img = black.copy()
    cv2.drawContours(cnt_img, [cnt], 0, (255), -1)
    cnt_img_rot = skimage.transform.rotate(cnt_img, angle_line, resize=False)
    thickness = np.mean(np.count_nonzero(cnt_img_rot, axis=0))
    print("line ",i,"=",thickness)
    i = i + 1

# save resulting images
cv2.imwrite('lines_thresh.jpg',thresh)
cv2.imwrite('lines_filtered.jpg',img2)
cv2.imwrite('lines_small_contour_skeleton.jpg', skeleton)

# show thresh and result    
cv2.imshow("thresh", thresh)
cv2.imshow("contours", contour_img)
cv2.imshow("lines_filtered", img2)
cv2.imshow("first_contour", contour_first_img)
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()

Threshold image:

Contour image:

Filtered contour image:

Skeleton image:

Angle (in degrees) and thickness (in pixels):

angle: 3.1869032185349733
line  1 = 8.79219512195122
line  2 = 49.51609756097561

To get the thickness in nanometers, multiply the thickness in pixels by 314 nm per pixel.
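For example, a quick check with the values printed above, using the 314 nm/pixel scale stated in the question:

print(8.792 * 314)    # line 1: ~2761 nm, i.e. about 2.8 microns
print(49.516 * 314)   # line 2: ~15548 nm, i.e. about 15.5 microns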

ADDITION

If I start with your tiff image, the following shows my preprocessing, which is similar to yours.

import cv2
import numpy as np
import skimage.morphology
import skimage.transform
import math

# read image
img = cv2.imread('lines.tif')

# convert to grayscale
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

# threshold
thresh = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)[1]

# apply morphology
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,5))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (29,1))
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)

# get contours
new_contours = []
img2 = np.zeros_like(gray, dtype=np.uint8)
contour_img = gray.copy()
contour_img = cv2.merge([contour_img,contour_img,contour_img])
contours = cv2.findContours(morph , cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for cntr in contours:
    area = cv2.contourArea(cntr)
    if area > 1000:
        cv2.drawContours(contour_img, [cntr], 0, (0,0,255), 1)
        cv2.drawContours(img2, [cntr], 0, (255), -1)
        new_contours.append(cntr)

# sort contours by area
cnts_sort = sorted(new_contours, key=lambda x: cv2.contourArea(x), reverse=False)

# select first (smaller) sorted contour
first_contour = cnts_sort[0]
contour_first_img = np.zeros_like(morph, dtype=np.uint8)
cv2.drawContours(contour_first_img, [first_contour], 0, (255), -1)

# thin smaller contour
thresh1 = (contour_first_img/255).astype(np.float64)
skeleton = skimage.morphology.skeletonize(thresh1)
skeleton = (255*skeleton).clip(0,255).astype(np.uint8)

# get skeleton points
pts = np.column_stack(np.where(skeleton.transpose()==255))

# fit line to pts
(vx,vy,x,y) = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)
#print(vx,vy,x,y)
x_axis = np.array([1, 0])    # unit vector in the same direction as the x axis
line_direction = np.array([vx, vy])    # unit vector in the same direction as your line
dot_product = np.dot(x_axis, line_direction)
[angle_line] = (180/math.pi)*np.arccos(dot_product)
print("angle:", angle_line)

# loop over each sorted contour
# draw contour filled on black background
# rotate
# get mean thickness from np.count_nonzero
black = np.zeros_like(thresh, dtype=np.uint8)
i = 1
for cnt in cnts_sort:
    cnt_img = black.copy()
    cv2.drawContours(cnt_img, [cnt], 0, (255), -1)
    cnt_img_rot = skimage.transform.rotate(cnt_img, angle_line, resize=False)
    thickness = np.mean(np.count_nonzero(cnt_img_rot, axis=0))
    print("line ",i,"=",thickness)
    i = i + 1

# save resulting images
cv2.imwrite('lines_thresh2.jpg',thresh)
cv2.imwrite('lines_morph2.jpg',morph)
cv2.imwrite('lines_filtered2.jpg',img2)
cv2.imwrite('lines_small_contour_skeleton2.jpg', skeleton)

# show thresh and result    
cv2.imshow("thresh", thresh)
cv2.imshow("morph", morph)
cv2.imshow("contours", contour_img)
cv2.imshow("lines_filtered", img2)
cv2.imshow("first_contour", contour_first_img)
cv2.imshow("skeleton", skeleton)
cv2.waitKey(0)
cv2.destroyAllWindows()

Threshold image:

Morphology image:

Filtered lines image:

Skeleton image:

Angle (in degrees) and thickness (in pixels):

angle: 3.206927978669998
line  1 = 9.26171875
line  2 = 49.693359375

3wabscal2#

  • Straighten the image with a deskew step.

  • Then count the pixels of the label color in each column, and divide by the number of columns to get the average thickness (a minimal sketch of this follows below).
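This answer comes without code; below is a minimal sketch of the idea, assuming a binary mask image that contains only the label to be measured. The file name line_mask.png is a placeholder, and the deskew here uses cv2.minAreaRect, whose angle convention varies between OpenCV versions (hence the normalization step).

import cv2
import numpy as np

mask = cv2.imread('line_mask.png', 0)  # hypothetical binary mask of one label

# deskew: estimate the line's tilt from its minimum-area bounding rectangle
ys, xs = np.nonzero(mask)
pts = np.column_stack((xs, ys)).astype(np.float32)
angle = cv2.minAreaRect(pts)[-1]
if angle > 45:            # fold the angle into (-45, 45] for a near-horizontal line
    angle -= 90
h, w = mask.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
straight = cv2.warpAffine(mask, M, (w, h))

# count label pixels per column, averaged over the columns the line actually spans
col_counts = np.count_nonzero(straight > 127, axis=0)
mean_thickness_px = col_counts[col_counts > 0].mean()
print(mean_thickness_px * 314, "nm")   # 314 nm per pixel, per the question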

c90pui9n3#

This can be done with various tools in scipy.

import numpy as np
import PIL.Image
from matplotlib.pyplot import imshow, plot, hlines, legend  # plotting calls here assume an interactive pylab-style session

I = PIL.Image.open("input.jpg")
img = np.array(I).mean(axis=2)   # average the color channels to get grayscale
mask = img == 255  # or some kind of thresholding
imshow(mask)   # note: this is a binary image; the green coloring is a rendering/aliasing artifact

If you zoom in, you can see that the segmented region is broken up:

To avoid this problem, we can dilate the mask:

from scipy import ndimage as ni
mask1 = ni.binary_dilation(mask, iterations=2)
imshow(mask1)

Now we can find the connected regions and take the top regions with the most pixels, which should be the two lines of interest:

lab, nlab = ni.label(mask1)
max_labs = np.argsort([ (lab==i).sum() for i in range(1, nlab+1)])[::-1]+1
imshow(lab==max_labs[0])

and imshow(lab==max_labs[1]) shows the second line.
Taking the first line as an example:

from scipy.stats import linregress
y0,x0 = np.where(lab==max_labs[0])
l0 = linregress( x0, y0)
xi, yi = np.arange(img.shape[1]), np.arange(img.shape[1])*l0.slope + l0.intercept
plot( xi, yi, 'r--')

Interpolate along this region at different y-intercepts and compute the mean signal along each line:

from scipy.interpolate import RectBivariateSpline
img0 = img.copy()
img0[~(lab==max_labs[0])] = 0  # set everything outside this line region to 0
rbv = RectBivariateSpline(np.arange(img.shape[0]), np.arange(img.shape[1]), img0)
prof0 = [rbv.ev(yi+i, xi).mean() for i in np.arange(-300,300)]  # pick a wide window here (-300,300), can be more technical, but not necessary
plot(prof0)


Compute the FWHM of this profile using your favorite method, then multiply by your pixel-to-nanometer factor.
I would use a Gaussian fit to compute the FWHM:

xvals = np.arange(len(prof0))
yvals = np.array(prof0)

def func(p, xvals, yvals):
    mu,var, amp = p
    model = np.exp(-(xvals-mu)**2/2/var)*amp
    resid = (model-yvals)**2
    return resid.sum()
from scipy.optimize import minimize
x0  = 300,200,255 # initial estimate of mu, variance, amplitude 
fit_gauss = minimize(func, x0=x0, args=(xvals, yvals), method='Nelder-Mead')

mu, var, amp = fit_gauss.x
fwhm = 2.355 * np.sqrt(var)

# display using matplotlib plot /hlines
plot( xvals, yvals)
plot( xvals, amp*np.exp(-(xvals-mu)**2/2/var) )
hlines(amp*0.5, mu-fwhm/2., mu+fwhm/2, color='r')
legend(("profile","fit gauss","fwhm=%.2f pix" % fwhm))

Finally, thickness = fwhm * 314 nm, or about 13 microns for this line.
Applying exactly the same approach to the second line (lab==max_labs[1]) gives a thickness of about 2.2 microns:
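As a quick check of that conversion (314 nm per pixel, as stated in the question):

thickness_nm = fwhm * 314.0                            # fwhm from the Gaussian fit above, in pixels
print("thickness = %.2f um" % (thickness_nm / 1e3))    # ~13 um for this line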

Note that I made this example using interactive plotting, so the calls to imshow, plot, etc. are there for the reader's reference. You may need extra steps to recreate the images I uploaded (zooming in, etc.).
