Sending live video frames over the network in Python with OpenCV

b1zrtrql · posted 2022-11-24 in Python

I am trying to send live video frames captured with my camera to a server for processing. I am using OpenCV for the image processing and Python as the language. Here is my code.

Client side:

import cv2
import numpy as np
import socket
import sys
import pickle
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
    ret,frame=cap.read()
    print sys.getsizeof(frame)
    print frame
    clientsocket.send(pickle.dumps(frame))

Server side:

import socket
import sys
import cv2
import pickle
import numpy as np
HOST=''
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print 'Socket created'

s.bind((HOST,PORT))
print 'Socket bind complete'
s.listen(10)
print 'Socket now listening'

conn,addr=s.accept()

while True:
    data=conn.recv(80)
    print sys.getsizeof(data)
    frame=pickle.loads(data)
    print frame
    cv2.imshow('frame',frame)

This code gives me an end-of-file error, which makes sense, because data keeps arriving at the server and pickle doesn't know where a frame ends. My searching on the internet led me to pickle, but it doesn't work so far.

Note: I set conn.recv to 80 because that is the number I get when I print sys.getsizeof(frame).


bis0qfac1#

A few things:

  • Use sendall instead of send, since send is not guaranteed to transmit everything in one call
  • pickle is fine for data serialization, but you have to define a protocol of your own for the messages exchanged between client and server, so that you know in advance how much data to read for unpickling (see below)
  • For recv you will get better performance if you receive in big chunks, so replace 80 with 4096 or even more
  • Beware of sys.getsizeof: it returns the in-memory size of the object, which is not the same as the size (length) of the bytes sent over the network; for a Python string the two values are quite different
  • Mind the size of the frames you are sending. The code below supports frames of up to 65535 bytes. If you have bigger frames, change the "H" to "L".
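The getsizeof point is easy to verify on its own: len() gives the number of bytes you actually put on the wire, while sys.getsizeof() also counts CPython object overhead. A quick sketch (nothing here is specific to OpenCV; the payload is just a stand-in for a serialized frame):

```python
import pickle
import sys

payload = b"x" * 1000          # stand-in for frame data
blob = pickle.dumps(payload)

# len() is what matters for the network protocol
print(len(blob))
# getsizeof() also counts the bytes object's header, so it is larger
print(sys.getsizeof(blob))
```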

Example of a protocol:
client_cv.py

import cv2
import numpy as np
import socket
import sys
import pickle
import struct ### new code
cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))
while True:
    ret,frame=cap.read()
    data = pickle.dumps(frame) ### new code
    clientsocket.sendall(struct.pack("H", len(data))+data) ### new code

server_cv.py

import socket
import sys
import cv2
import pickle
import numpy as np
import struct ## new

HOST=''
PORT=8089

s=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
print('Socket created')

s.bind((HOST,PORT))
print('Socket bind complete')
s.listen(10)
print('Socket now listening')

conn,addr=s.accept()

### new
data = ""
payload_size = struct.calcsize("H") 
while True:
    while len(data) < payload_size:
        data += conn.recv(4096)
    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("H", packed_msg_size)[0]
    while len(data) < msg_size:
        data += conn.recv(4096)
    frame_data = data[:msg_size]
    data = data[msg_size:]
    ###

    frame=pickle.loads(frame_data)
    print frame
    cv2.imshow('frame',frame)

You can probably optimize all of this a lot (less copying, using the buffer interface, etc.) but at least you get the idea.
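One caveat worth checking before adopting the "H" format: struct.pack("H", n) raises struct.error as soon as n exceeds 65535, and a pickled camera frame is normally far larger than that, so in practice you will need the "L" variant the answer mentions. A quick sketch:

```python
import struct

print(struct.pack("H", 65535))   # fits in an unsigned short

try:
    struct.pack("H", 65536)      # one past the limit
except struct.error as exc:
    print("too large for 'H':", exc)
```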


vtwuwzda2#

After months of searching the internet, this is what I came up with. I have neatly packaged it into a class, with unit tests and documentation, as SmoothStream; check it out, it was the only simple, working version of streaming I could find anywhere.
I used this code and wrapped mine around it.
Viewer.py

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything

while True:
    try:
        frame = footage_socket.recv_string()
        img = base64.b64decode(frame)
        npimg = np.frombuffer(img, dtype=np.uint8)
        source = cv2.imdecode(npimg, 1)
        cv2.imshow("Stream", source)
        cv2.waitKey(1)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

Streamer.py

import base64
import cv2
import zmq

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

camera = cv2.VideoCapture(0)  # init the camera

while True:
    try:
        grabbed, frame = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        jpg_as_text = base64.b64encode(buffer)
        footage_socket.send(jpg_as_text)

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        break
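Note that base64 inflates every message by about a third (4 output bytes for every 3 input bytes), which adds up at video bitrates; answer 4 below drops it for that reason. A quick sketch of the overhead, independent of any camera:

```python
import base64

raw = b"\x00" * 300_000        # stand-in for a JPEG buffer
text = base64.b64encode(raw)

print(len(raw), len(text))     # 300000 vs 400000: ~33% larger
```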

uemypmqf3#

I modified the code from @mguijarr to work with Python 3:

  • data is now a byte literal instead of a string literal
  • Changed "H" to "L" to send larger frame sizes. Based on the documentation, we can now send frames of up to 2^32 bytes in size instead of just 2^16.
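One portability caveat with "L": in native mode, struct.calcsize("L") depends on the platform (commonly 8 bytes on 64-bit Linux/macOS, 4 on Windows), so a client and a server built on different platforms can disagree about the header size. Prefixing the format with "!" (network byte order) pins the size everywhere; a sketch:

```python
import struct

# native "L" varies by platform; the "!" forms do not
print(struct.calcsize("L"))    # 4 or 8 depending on the machine
print(struct.calcsize("!L"))   # always 4 (network byte order)
print(struct.calcsize("!Q"))   # always 8, if you want a 64-bit length
```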

Server.py

import pickle
import socket
import struct

import cv2

HOST = ''
PORT = 8089

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('Socket created')

s.bind((HOST, PORT))
print('Socket bind complete')
s.listen(10)
print('Socket now listening')

conn, addr = s.accept()

data = b'' ### CHANGED
payload_size = struct.calcsize("L") ### CHANGED

while True:

    # Retrieve message size
    while len(data) < payload_size:
        data += conn.recv(4096)

    packed_msg_size = data[:payload_size]
    data = data[payload_size:]
    msg_size = struct.unpack("L", packed_msg_size)[0] ### CHANGED

    # Retrieve all data based on message size
    while len(data) < msg_size:
        data += conn.recv(4096)

    frame_data = data[:msg_size]
    data = data[msg_size:]

    # Extract frame
    frame = pickle.loads(frame_data)

    # Display
    cv2.imshow('frame', frame)
    cv2.waitKey(1)

Client.py

import cv2
import numpy as np
import socket
import sys
import pickle
import struct

cap=cv2.VideoCapture(0)
clientsocket=socket.socket(socket.AF_INET,socket.SOCK_STREAM)
clientsocket.connect(('localhost',8089))

while True:
    ret,frame=cap.read()
    # Serialize frame
    data = pickle.dumps(frame)

    # Send message length first
    message_size = struct.pack("L", len(data)) ### CHANGED

    # Then data
    clientsocket.sendall(message_size + data)
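The two inner receive loops on the server can be factored into a small helper that reads an exact number of bytes and detects a closed connection (the helper name is my own). A sketch, self-checked over a local socket pair so no camera is needed:

```python
import socket
import struct

def recv_exact(conn, n):
    """Read exactly n bytes from conn, or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

# quick self-check: length-prefixed message over a socket pair
a, b = socket.socketpair()
payload = b"pretend this is a pickled frame"
a.sendall(struct.pack("L", len(payload)) + payload)

header = recv_exact(b, struct.calcsize("L"))
size = struct.unpack("L", header)[0]
assert recv_exact(b, size) == payload
```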

zkure5ic4#

As @Rohan Sawant said, I used the zmq library, but without the base64 encoding. Here is the new code.
Streamer.py

import base64
import cv2
import zmq
import numpy as np
import time

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://192.168.1.3:5555')

camera = cv2.VideoCapture(0)  # init the camera

while True:
    try:
        grabbed, frame = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        footage_socket.send(buffer)

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        break

Viewer.py

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything

while True:
    try:
        frame = footage_socket.recv()
        npimg = np.frombuffer(frame, dtype=np.uint8)
        #npimg = npimg.reshape(480,640,3)
        source = cv2.imdecode(npimg, 1)
        cv2.imshow("Stream", source)
        cv2.waitKey(1)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

8mmmxcuj5#

I'm a bit late, but my heavily threaded VidGear video-processing Python library now provides the NetGear API, which is designed specifically for transferring video frames synchronously in real time between interconnected systems over the network.

A. Server side: (minimal example)

Open your favorite terminal and run the following Python code:

**Note:** you can press [Ctrl+C] on your keyboard on the server side at any time to end the stream on both the server and the client!

# import libraries
from vidgear.gears import VideoGear
from vidgear.gears import NetGear

stream = VideoGear(source='test.mp4').start() #Open any video stream
server = NetGear() #Define netgear server with default settings

# infinite loop until [Ctrl+C] is pressed
while True:
    try: 
        frame = stream.read()
        # read frames

        # check if frame is None
        if frame is None:
            #if True break the infinite loop
            break

        # do something with frame here

        # send frame to server
        server.send(frame)
    
    except KeyboardInterrupt:
        #break the infinite loop
        break

# safely close video stream
stream.stop()
# safely close server
server.close()

B. Client side: (minimal example)

Then open another terminal on the same system, run the following Python code and watch the output:

# import libraries
from vidgear.gears import NetGear
import cv2

#define netgear client with `receive_mode = True` and default settings
client = NetGear(receive_mode = True)

# infinite loop
while True:
    # receive frames from network
    frame = client.recv()

    # check if frame is None
    if frame is None:
        #if True break the infinite loop
        break

    # do something with frame here

    # Show output window
    cv2.imshow("Output Frame", frame)

    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press
    if key == ord("q"):
        #if 'q' key-pressed break out
        break

# close output window
cv2.destroyAllWindows()
# safely close client
client.close()

More advanced usage and related documentation can be found here: https://github.com/abhiTronix/vidgear/wiki/NetGear


z2acfund6#

I recently published the imagiz package for fast, non-blocking live video streaming over the network with OpenCV and ZMQ.
https://pypi.org/project/imagiz/
Client:

import imagiz
import cv2

client=imagiz.Client("cc1",server_ip="localhost")
vid=cv2.VideoCapture(0)
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]

while True:
    r,frame=vid.read()
    if r:
        r, image = cv2.imencode('.jpg', frame, encode_param)
        client.send(image)
    else:
        break

Server:

import imagiz
import cv2

server=imagiz.Server()
while True:
    message=server.recive()
    frame=cv2.imdecode(message.image,1)
    cv2.imshow("",frame)
    cv2.waitKey(1)

elcex8rz7#

I got it working on my macOS.
I used the code from @mguijarr and changed struct.pack from "H" to "L".
