What does OpenCV's release() do?

dfty9e19 · posted 2023-03-19 in: Other
Follow (0) | Answers (5) | Views (480)

I'm using a Raspberry Pi to capture the first 20 frames of a video. This is more of a conceptual question: while reading the OpenCV documentation on VideoCapture, I noticed they emphasize the importance of releasing the capture in the following code (as posted on their site):

import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame',gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()

What is the significance of cap.release()? Does omitting this line affect memory? If so, what are the effects, and why?


zdwk9cvp #1

When cap.release() is called:
1. Software resources are released.
2. Hardware resources are released.
Before calling cap.release(), try creating another instance with cap2 = cv2.VideoCapture(0):

cap = cv2.VideoCapture(0)
#cap.release() 

cap2 = cv2.VideoCapture(0)

Because you have not yet released the camera device, errors such as Device or resource busy are raised, which in turn causes an OpenCV exception:

libv4l2: error setting pixformat: Device or resource busy
VIDEOIO ERROR: libv4l unable to ioctl S_FMT
libv4l2: error setting pixformat: Device or resource busy
libv4l1: error setting pixformat: Device or resource busy
VIDEOIO ERROR: libv4l unable to ioctl VIDIOCSPICT

libv4l2: error setting pixformat: Device or resource busy
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline
) in cvCaptureFromCAM_GStreamer, file /home/xxx/Programs/OpenCV/src/opencv-master/modules/videoio/src/cap_gstreamer.cpp, line 887
VIDEOIO(cvCreateCapture_GStreamer(CV_CAP_GSTREAMER_V4L2, reinterpret_cast<char *>(index))): raised OpenCV exception:

/home/xxx/Programs/OpenCV/src/opencv-master/modules/videoio/src/cap_gstreamer.cpp:887: error: (-2) GStreamer: unable to start pipeline
 in function cvCaptureFromCAM_GStreamer
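The exclusive-access behavior described above can be sketched without any camera hardware. `FakeCamera` below is a hypothetical stand-in for the driver's busy check, not an OpenCV class; with a real device, the second `cv2.VideoCapture(0)` would fail in the same spirit:

```python
class FakeCamera:
    """Toy model of an exclusive capture device: only one open handle
    at a time, mimicking the V4L2 'Device or resource busy' behavior."""
    _busy = False

    def __init__(self):
        if FakeCamera._busy:
            raise RuntimeError("Device or resource busy")
        FakeCamera._busy = True

    def release(self):
        FakeCamera._busy = False


cap = FakeCamera()
try:
    cap2 = FakeCamera()      # second open without release -> fails
except RuntimeError as exc:
    print(exc)               # → Device or resource busy

cap.release()                # free the device first...
cap2 = FakeCamera()          # ...then the second open succeeds
cap2.release()
```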

cidc1ykv #2

I'm not certain, but according to the official documentation it both closes the I/O device and deallocates a pointer, so one can assume it frees some amount of memory (however much that may be). More importantly, I believe it releases the device/file so that other processes can access it. Quoting the docs:
Closes video file or capturing device.
The method is automatically called by subsequent VideoCapture::open and by the VideoCapture destructor.
The C function also deallocates memory and clears the *capture pointer.
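Since the docs say the destructor handles release implicitly, one way to make the release explicit and exception-safe is a context manager. This is a sketch, not an OpenCV API: `FakeCapture` is a stand-in so the example runs without a camera; with OpenCV you would pass `cv2.VideoCapture(0)` to `managed()` instead:

```python
from contextlib import contextmanager


@contextmanager
def managed(cap):
    """Guarantee release() even if frame processing raises."""
    try:
        yield cap
    finally:
        cap.release()


# Stand-in object so the sketch runs without a camera (hypothetical
# class, not part of cv2).
class FakeCapture:
    def __init__(self):
        self.opened = True

    def release(self):
        self.opened = False


cap = FakeCapture()
with managed(cap):
    pass                 # read/process frames here
print(cap.opened)        # → False
```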


xkftehaa #3

In my experience this matters most when you use a live camera on a laptop or in a robotics project (Raspberry Pi or otherwise): you really do need to release the VideoCapture object (and the VideoWriter as well) to avoid hardware conflicts.
You can see the same kind of problem in everyday laptop use: sometimes you have to close one program (from the task manager) before another can use the camera.


o4tp2gmn #4

As Kinght mentioned above, it releases hardware and software resources.
However, in newer versions of OpenCV this happens automatically after exiting the frame loop.
According to the docstring of OpenCV's Python bindings:
release() -> Closes video file or capturing device. The method is automatically called by subsequent VideoCapture::open and by the VideoCapture destructor. The C function also deallocates memory and clears the *capture pointer.
So I tried allocating two resources in a row without releasing, and it worked fine, as in the code below:

import cv2 as cv

cap = cv.VideoCapture('Resources/test1.mp4')
cap2 = cv.VideoCapture('Resources/test2.mp4')

while True:
    isSuccess1, img1 = cap.read()
    isSuccess2, img2 = cap2.read()
    print('isSuccess1', isSuccess1)
    print('isSuccess2', isSuccess2)
    if not isSuccess1 or not isSuccess2:
        break  # stop once either video runs out of frames
    cv.imshow('Video1', img1)
    cv.imshow('Video2', img2)

    if cv.waitKey(1) & 0xFF == ord('q'):
        break

Both videos opened fine, and my terminal output was:

isSuccess True
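The docstring quoted above says the VideoCapture destructor calls release() automatically, which is why the two-capture experiment works. In CPython the same pattern can be illustrated with a refcounted stand-in (`AutoCapture` is hypothetical; the real VideoCapture releases its device in a C++ destructor the same way):

```python
released = []


class AutoCapture:
    """Hypothetical stand-in for cv2.VideoCapture, releasing itself
    from its destructor just as the real class does."""

    def release(self):
        released.append('released')

    def __del__(self):
        self.release()


cap = AutoCapture()
del cap            # refcount hits zero, __del__ runs immediately in CPython
print(released)    # → ['released']
```

Note that relying on the destructor ties cleanup to garbage-collection timing; on other Python implementations (e.g. PyPy) the call may be delayed, which is one reason to call release() explicitly anyway.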

bxpogfeg #5

How do I close the camera window?

import cv2
import numpy as np
import face_recognition
import os
from pandas import read_csv
from datetime import datetime
from csv import writer

print("##### Welcome to AUTO-ATTENDANCE #####\n")
path = 'Training_images'
images = []
classNames = []
myList = os.listdir(path)
print("Loading images from training images...")

for cl in myList:
    curImg = cv2.imread(f'{path}/{cl}')
    images.append(curImg)
    classNames.append(os.path.splitext(cl)[0])

def findEncodings(images):
    encodeList = []
    print("Encoding images for recognition...")
    for img in images:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encode = face_recognition.face_encodings(img)[0]
        encodeList.append(encode)
    return encodeList

def markAttendance(name, status):
    data = read_csv('Attendance.csv')
    nameList = data['Name'].tolist()
    with open('Attendance.csv', 'a') as f:
        write = writer(f)
        if name not in nameList:
            now = datetime.now()
            dtString = now.strftime('%H:%M:%S')
            row = [name, dtString, status]
            write.writerow(row)
            nameList.append(name)
            print(f"Attendance for {name} has been marked as {status}!!")

encodeListKnown = findEncodings(images)
print('Encoding Complete!!\n')
cap = cv2.VideoCapture(0)

while True:
    success, img = cap.read()
    imgS = cv2.resize(img, (0, 0), None, 0.25, 0.25)
    imgS = cv2.cvtColor(imgS, cv2.COLOR_BGR2RGB)
    facesCurFrame = face_recognition.face_locations(imgS)
    encodesCurFrame = face_recognition.face_encodings(imgS,facesCurFrame)
    for encodeFace, faceLoc in zip(encodesCurFrame, facesCurFrame):
        matches = face_recognition.compare_faces(encodeListKnown, encodeFace)
        faceDis = face_recognition.face_distance(encodeListKnown, encodeFace)
        matchIndex = np.argmin(faceDis)
        if matches[matchIndex]:
            name = classNames[matchIndex].upper()
            y1, x2, y2, x1 = faceLoc
            y1, x2, y2, x1 = y1 * 4, x2 * 4, y2 * 4, x1 * 4
            cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.rectangle(img, (x1, y2 - 35), (x2, y2), (0, 255, 0), cv2.FILLED)
            cv2.putText(img, name, (x1 + 6, y2 - 6), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 255, 255), 2)
            markAttendance(name, 'Present')

    # Show the frame and poll the keyboard once per loop iteration,
    # not only when a face matches
    cv2.imshow('Webcam', img)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('q'):
        data = read_csv('Attendance.csv')
        nameList = data['Name'].tolist()
        for name in classNames:
            if name.upper() not in nameList:
                markAttendance(name, 'Absent')
        break

# Release the camera and close the window on exit
cap.release()
cv2.destroyAllWindows()
