Alright, my goal is quite simple: create a JPEG-encoded image from a buffer holding packed/interleaved BGR data (it could just as well be RGB).
The NVIDIA documentation contains a sample which, as far as I can tell, describes the correct image input here.
So I tried the following:
#include <vector>

#include <cuda_runtime.h>
#include <nvjpeg.h>

// very simple
typedef struct {
    int width;
    int height;
    unsigned char *buffer;     // device pointer holding interleaved BGR data
    unsigned long data_size;
} my_bitmap_type;

std::vector<unsigned char> BitmapToJpegCUDA(const my_bitmap_type *image)
{
    nvjpegHandle_t nv_handle;
    nvjpegEncoderState_t nv_enc_state;
    nvjpegEncoderParams_t nv_enc_params;
    cudaStream_t stream = NULL;
    nvjpegStatus_t er;

    nvjpegCreateSimple(&nv_handle);
    nvjpegEncoderStateCreate(nv_handle, &nv_enc_state, stream);
    nvjpegEncoderParamsCreate(nv_handle, &nv_enc_params, stream);

    nvjpegImage_t nv_image;
    nv_image.channel[0] = image->buffer;
    nv_image.pitch[0] = 3 * image->width;

    // Nope, that's for planar images!
    // nv_image.channel[0] = image->buffer;
    // nv_image.channel[1] = image->buffer + image->width * image->height;
    // nv_image.channel[2] = image->buffer + 2 * image->width * image->height;
    // nv_image.pitch[0] = image->width;
    // nv_image.pitch[1] = image->width;
    // nv_image.pitch[2] = image->width;

    er = nvjpegEncodeImage(nv_handle, nv_enc_state, nv_enc_params, &nv_image,
                           NVJPEG_INPUT_BGRI, image->width, image->height, stream);
    LOG(ERROR) << "enc " << er;

    size_t length = 0;
    nvjpegEncodeRetrieveBitstream(nv_handle, nv_enc_state, NULL, &length, stream);
    cudaStreamSynchronize(stream);

    std::vector<unsigned char> jpeg(length);
    nvjpegEncodeRetrieveBitstream(nv_handle, nv_enc_state, jpeg.data(), &length, 0);

    nvjpegEncoderParamsDestroy(nv_enc_params);
    nvjpegEncoderStateDestroy(nv_enc_state);
    nvjpegDestroy(nv_handle);
    return jpeg;
}
The logger shows that nvjpegEncodeImage simply returns NVJPEG_STATUS_INVALID_PARAMETER, which tells me nothing. In case you suspect my_bitmap_type is filled incorrectly, here is a similar turbojpeg-backed encoder:
#include <vector>

#include <cuda_runtime.h>
#include <turbojpeg.h>

std::vector<unsigned char> BitmapToJpegBuffer(const my_bitmap_type *image)
{
    // the source buffer lives in device memory, so copy it to the host first
    std::vector<unsigned char> out_data(3 * image->width * image->height);
    cudaError_t err = cudaMemcpy(out_data.data(), image->buffer, image->data_size, cudaMemcpyDeviceToHost);
    if (cudaSuccess != err) {
        LOG(ERROR) << "failed to copy CUDA memory: " << err;
    }

    tjhandle jpeg = tjInitCompress();
    unsigned char *encoded_buf = nullptr;
    long unsigned int encoded_sz = 0;
    int tjres = tjCompress2(jpeg,
                            out_data.data(),
                            image->width,
                            image->width * 3,   // pitch in bytes per line
                            image->height,
                            TJPF_BGR,
                            &encoded_buf,
                            &encoded_sz,
                            TJSAMP_444,
                            95,
                            TJFLAG_FASTDCT);
    if (tjres != 0) {
        LOG(ERROR) << "jpeg compression failed!";
        return {};
    }

    std::vector<unsigned char> result(encoded_buf, encoded_buf + encoded_sz);
    tjFree(encoded_buf);
    tjDestroy(jpeg);
    return result;
}
...and it works just fine.
I'm desperately trying to figure out what is missing in the code. Any help or advice would be appreciated.
UPD: using CentOS 7 / libnvjpeg-11-1.x86_64 (CUDA 11.1) / gcc 4.8.5
1 Answer
Well, this is a bit odd, but after some trial and error it turned out that NVIDIA's documentation omits an essential detail:
Although the documentation explicitly states that the default chroma subsampling for JPEG compression is 4:4:4, encoding does not work with the default encoder parameters: the subsampling has to be set explicitly.
So this single line of code fixes everything:
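A minimal sketch of that line, assuming the standard nvJPEG call for setting the encoder's chroma sampling factors; it goes right after nvjpegEncoderParamsCreate in BitmapToJpegCUDA above:

    // explicitly set 4:4:4 chroma subsampling on the encoder parameters;
    // without this call, nvjpegEncodeImage rejects the input with
    // NVJPEG_STATUS_INVALID_PARAMETER on this nvJPEG version
    nvjpegEncoderParamsSetSamplingFactors(nv_enc_params, NVJPEG_CSS_444, stream);

With the sampling factors set explicitly, the rest of the encoding code works unchanged.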