Can OpenCV calculate an object's real-world distance in an image without a reference object?

ehxuflar · asked 2022-11-15 · in: Other

I have a picture of a human eye taken roughly 10 cm away using a mobile phone (no specifications regarding the camera). After some detection and contouring, I got 113 px as the Euclidean distance between the center of the detected iris and the outermost edge of the iris in the image. Image dimensions: 483×578 px.
I tried converting pixels to mm by simply multiplying the number of pixels by the size of a pixel in mm, since 1 px is roughly equal to 0.264 mm. But that gives the proper length only if the image is at a 1:1 scale with the real eye, which is not the case here.
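
(For reference, that conversion gives 113 × 0.264 ≈ 29.8 mm for the iris radius, several times any plausible value; 0.264 mm is the size of a display pixel at 96 DPI and says nothing about the scale of the photographed scene.)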

Edit:

  • Device used: OnePlus 7T
  • Field of view = 117 degrees
  • Aperture = f/2.2
  • Distance the photo was taken from = 10 cm (approx.)

Question:

Is there an optimal way to find the real-world radius of this particular iris with the information I have gathered through processing so far, without including a reference object in the image?

  • P.S. The actual HVID (horizontal visible iris diameter) of the volunteer's iris is 12.40 mm, measured using a Sirius (a high-end device for measuring the iris; I'm trying to reproduce the same measurement using Python and OpenCV).

eeq64g8w1#

After months of research and lots of trial and error, I was able to come up with a result. This is not the most ideal answer, but it gave me the expected results with decent precision.
Simply put, in order to measure an object's size/distance from an image, we need multiple parameters. In my case, I was trying to measure the diameter of the iris from a smartphone camera photo.
To make that possible, we need to know the following details prior to the calculation:

1. The size of the physical sensor (height and width, usually in mm). This is the camera module inside the smartphone; its details can be found on websites on the internet, but you need to know the exact brand and model of the smartphone used.

Note: You cannot use random values for these; otherwise you will get inaccurate results. Every step and constraint must be considered carefully.

2. The size of the image taken (in pixels).

Note: The size of the image can be obtained easily using img.shape, but make sure the image is not cropped. This method relies on the total width/height of the original smartphone image, so any modification or inconsistency will produce inaccurate results.
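
For instance, a minimal OpenCV snippet (the file name is a placeholder) for reading the original photo and checking its pixel dimensions:

```python
import cv2

# Load the original, uncropped smartphone photo (placeholder file name).
img = cv2.imread("eye_photo.jpg")
assert img is not None, "image not found"

# shape is (rows, cols, channels) for a color image.
height_px, width_px = img.shape[:2]
print(f"image size: {width_px} x {height_px} px")
```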

3. Focal length of the lens (mm)

Note: Information about the focal length of the lens can be found on the internet; random values must not be used. Make sure you take the images with the autofocus feature disabled so the focal length is preserved. If autofocus is on, the focal length will be changing constantly and the results will be all over the place.
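
As a fallback, if the physical focal length cannot be found but the field of view is published (the question lists 117°), the pinhole model gives f = (sensor width / 2) / tan(FOV / 2), measured along the axis the FOV refers to. A hedged sketch; the sensor width here is a placeholder, and real wide-angle lenses deviate from the pinhole model, so treat this only as a rough cross-check:

```python
import math

def focal_length_from_fov(sensor_width_mm: float, fov_deg: float) -> float:
    """Pinhole-model focal length from the sensor width and the
    field of view measured along the same axis."""
    return (sensor_width_mm / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

# Placeholder sensor width -- look up the exact camera module instead.
print(f"f ~= {focal_length_from_fov(4.7, 117.0):.2f} mm")
```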

4. Distance at which the image was taken (very important)

Note: As "Christoph Rackwitz" told in the comment section. The distance from which the image is taken must be known and should not be arbitrary. Head cannoning a number as input will always result in inaccuracy for sure. Make sure you properly measure the distance from sensor to the object using some sort of measuring tool. There are some depth detection algorithms out there in the internet but they are not accurate in most cases and need to calibrated after every single try. That is indeed an option if you dont have any setup to take consistent photos but inaccuracies are inevitable especially in objects like iris which requires medical precision.
Once you have gathered all of this "proper" information, the rest is to plug it into a very simple pair of equations derived from similar triangles.

Object height/width on sensor (mm) = Sensor height/width (mm) × Object height/width in image (px) / Image height/width (px)
Real object height/width (in the units of the distance) = Distance to object × Object height/width on sensor (mm) / Focal length (mm)

In the first equation, you must decide which axis you are measuring along. For instance, if the image was taken in portrait orientation and you are measuring the object's width in the image, then use the image width in pixels and the sensor width in mm.
The sensor height/width in pixels is nothing but the size of the image itself.
You must also obtain the object's size in pixels by some means (e.g. contour detection, as in the question).
If you are taking the image in landscape, make sure you pass the correct width and height.
Equation 2 is pretty simple as well; see the sketch below.
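
Putting the two equations together, a minimal sketch in Python (the function name is mine; the sensor width and focal length values are placeholders, not the real specs of any particular phone, and the image must be the full, uncropped sensor output):

```python
def real_object_size_mm(object_px: float, image_px: float,
                        sensor_mm: float, focal_length_mm: float,
                        distance_mm: float) -> float:
    """All sizes are measured along the same axis (width or height)."""
    # Equation 1: project the object onto the physical sensor.
    object_on_sensor_mm = sensor_mm * object_px / image_px
    # Equation 2: similar triangles through the lens (pinhole model).
    return distance_mm * object_on_sensor_mm / focal_length_mm

# Illustrative call with this thread's pixel numbers (2 * 113 = 226 px iris
# diameter in a 483 px-wide image, shot from ~100 mm). The optics values are
# placeholders; the output is meaningless until they are replaced with the
# real sensor width and focal length of the camera module used.
print(real_object_size_mm(object_px=226, image_px=483,
                          sensor_mm=4.7, focal_length_mm=4.25,
                          distance_mm=100.0))
```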

Things to consider:

  1. No magnification (digital magnification can destroy any depth information)
  2. No autofocus (already explained)
  3. No cropping, resizing, or otherwise editing the image dimensions (already explained)
  4. No image skewing (rotating the image can make it unusable)
  5. Do not substitute random values for any of these inputs (golden advice)
  6. Do not tilt the camera while taking images (tilting distorts the image, so the object's height/width will be altered)
  7. Make sure the object and the camera are exactly in line with each other
  8. Don't use the image's EXIF data (the depth information it contains is absolute garbage; it is not accurate at all, so do not rely on it)

Things I'm still unsure about:

  1. Lens distortion / manufacturing defects
  2. Effects of field of view
  3. Perspective foreshortening due to camera tilt
  4. Depth cameras

DISCLAIMER: There are multiple ways to solve this problem, but I chose this method, and I highly recommend you explore further and see what you can come up with. You can basically extend this idea to measure pretty much any object with a smartphone (within the limits of what a normal smartphone camera can capture). (Please don't try to measure the size of an amoeba with this; it simply won't work, but you can still take some of the advice I have given to your advantage.)
If you have cool ideas or issues with my answer, please feel free to let me know; I would love to have a discussion. Feel free to correct me if I have made any mistakes or misunderstood any of these concepts.
Final Note:

No matter how hard you try, you cannot make something like a smartphone behave like a camera sensor that was purpose-built for capturing images for measurement. Smartphones will never beat them, but we can manipulate a smartphone camera to achieve similar results to some extent. So you must keep this in mind; I learned it the hard way.
