Can Paddle run inference with multiple predictors at the same time?

ukqbszuj  posted on 2022-04-21  in  Java
Following (0) | Answers (2) | Views (213)


  • Title: creating multiple predictors with create_paddle_predictor and running them under the same data_generator raises an error
  • Version / environment info:

   1) PaddlePaddle version: paddle fluid 1.5.1
   2) GPU: if predicting on GPU, please provide the GPU model and the CUDA/CUDNN version numbers
   3) System environment: cuda9, Python version: python3.6
Note: you can obtain the information above by running summary_env.py.
In short, since I need two predictors to run two models on different tasks, I called create_paddle_predictor twice, and both predictors run under the same predict_data_generator so that the results can be collected together. Is this allowed? (Do I need to create two fluid.Executor instances, or is sharing one fine? Or can two models simply not predict together?)
The code that triggers the error:
predict_prog = fluid.Program()
predict_startup = fluid.Program()
with fluid.program_guard(predict_prog, predict_startup):
    with fluid.unique_name.guard():
        ret = create_model(
            args,
            pyreader_name='predict_reader',
            ernie_config=ernie_config,
            is_classify=True,
            is_prediction=True,
        )
        predict_pyreader = ret['pyreader']
        left_score = ret['left_probs']
        #right_score = ret['right_probs']
        type_probs = ret['type_probs']
        feed_targets_name = ret['feed_targets_name']

predict_prog = predict_prog.clone(for_test=True)

if args.use_cuda:
    dev_list = fluid.cuda_places()
    place = dev_list[0]
    print('----------place-----------')
    print(place)
    dev_count = len(dev_list)
else:
    place = fluid.CPUPlace()
    dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))

place = fluid.CUDAPlace(0) if args.use_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(predict_startup)

if args.init_checkpoint:
    init_pretraining_params(exe, args.init_checkpoint, predict_prog)
else:
    raise ValueError(
        "args 'init_checkpoint' should be set for prediction!",
    )

assert args.save_inference_model_path, \
    "args save_inference_model_path should be set for prediction"
_, ckpt_dir = os.path.split(args.init_checkpoint.rstrip('/'))
dir_name = ckpt_dir + '_inference_model'
model_path = os.path.join(args.save_inference_model_path, dir_name)
log.info("save inference model to %s" % model_path)
log.info("feed_targets_name %s" % feed_targets_name)
#feed_targets_name.remove('read_file_0.tmp_3')
#feed_targets_name.remove('read_file_0.tmp_8')
#feed_targets_name.remove('read_file_0.tmp_12')

fluid.io.save_inference_model(
    model_path,
    feed_targets_name,
    [left_score, type_probs],
    #[left_score, right_score, type_probs],
    exe,
    main_program=predict_prog,
)

==================== save score inference model ====================

predict_prog = fluid.Program()
predict_startup = fluid.Program()
with fluid.program_guard(predict_prog, predict_startup):
    with fluid.unique_name.guard():
        ret = create_score_model(
            args,
            pyreader_name='predict_score_reader',
            ernie_config=ernie_config,
            is_classify=True,
            is_prediction=True,
        )
        predict_pyreader = ret['pyreader']
        left_score = ret['left_probs']
        #right_score = ret['right_probs']
        #type_probs = ret['type_probs']
        feed_targets_name = ret['feed_targets_name']

predict_prog = predict_prog.clone(for_test=True)

if args.use_cuda:
    dev_list = fluid.cuda_places()
    place = dev_list[0]
    print('----------place-----------')
    print(place)
    dev_count = len(dev_list)
else:
    place = fluid.CPUPlace()
    dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))

place = fluid.CUDAPlace(1) if args.use_cuda else fluid.CPUPlace()
exe1 = fluid.Executor(place)
exe1.run(predict_startup)

if args.init_checkpoint:
    # Note: this passes `exe` (the first executor, on CUDAPlace(0)) rather than
    # `exe1`, which ran this program's startup above; the condition also checks
    # init_checkpoint while loading init_score_checkpoint.
    init_pretraining_params(exe, args.init_score_checkpoint, predict_prog)
else:
    raise ValueError(
        "args 'init_checkpoint' should be set for prediction!",
    )

assert args.save_score_inference_model_path, \
    "args save_score_inference_model_path should be set for prediction"
_, ckpt_dir = os.path.split(args.init_score_checkpoint.rstrip('/'))
dir_name = ckpt_dir + '_inference_model'
score_model_path = os.path.join(args.save_score_inference_model_path, dir_name)
log.info("save score inference model to %s" % score_model_path)
log.info("feed_targets_name %s" % feed_targets_name)
#feed_targets_name.remove('read_file_0.tmp_3')
#feed_targets_name.remove('read_file_0.tmp_8')
#feed_targets_name.remove('read_file_0.tmp_12')

fluid.io.save_inference_model(
    score_model_path,
    feed_targets_name,
    # Note: `type_probs` here still refers to the *first* model's output (the
    # score model's 'type_probs' fetch is commented out above), so a variable
    # from another program is being saved as a fetch target of this one.
    [left_score, type_probs],
    #[left_score, right_score, type_probs],
    exe,
    main_program=predict_prog,
)
#================================================================

config = AnalysisConfig(model_path)
score_config = AnalysisConfig(score_model_path)
if not args.use_cuda:
    log.info("disable gpu")
    config.disable_gpu()
else:
    log.info("using gpu")
    config.enable_use_gpu(1024)

if not args.use_cuda:
    log.info("disable gpu")
    score_config.disable_gpu()
else:
    log.info("using gpu")
    score_config.enable_use_gpu(1024)
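As an aside: since the two executors above are placed on CUDAPlace(0) and CUDAPlace(1), the two configs could be pinned to matching devices as well. A sketch, assuming this fluid 1.5.x build exposes the optional device-id argument of enable_use_gpu (initial memory-pool size in MB first, then the device id):

```python
# Sketch only: pin each predictor config to its own GPU so the two
# predictors do not compete for the same device.
config = AnalysisConfig(model_path)
score_config = AnalysisConfig(score_model_path)

# enable_use_gpu(memory_pool_init_size_mb, device_id)
config.enable_use_gpu(1024, 0)        # type/ranker model on GPU 0
score_config.enable_use_gpu(1024, 1)  # score model on GPU 1
```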

==================== create PaddlePredictor ====================

predictor = create_paddle_predictor(config)
score_predictor = create_paddle_predictor(score_config)

predict_data_generator = reader.data_generator(
    input_file=args.predict_set,
    batch_size=args.batch_size,
    epoch=1,
    shuffle=False,
)

log.info("-------------- prediction results --------------")
np.set_printoptions(precision=4, suppress=True)
index = 0
total_time = 0
qid_total = None
left_score_total = None
#right_score_total = None
type_prob_total = None
ent_id_total = None
for sample in predict_data_generator():
    src_ids_1 = sample[0]
    sent_ids_1 = sample[1]
    pos_ids_1 = sample[2]
    task_ids_1 = sample[3]
    input_mask_1 = sample[4]
    src_ids_2 = sample[5]
    sent_ids_2 = sample[6]
    pos_ids_2 = sample[7]
    task_ids_2 = sample[8]
    input_mask_2 = sample[9]
    #src_ids_3 = sample[10]
    #sent_ids_3 = sample[11]
    #pos_ids_3 = sample[12]
    #task_ids_3 = sample[13]
    #input_mask_3 = sample[14]
    #qids = sample[15]
    #ent_ids = sample[16]
    qids = sample[10]
    ent_ids = sample[11]
    inputs = [array2tensor(ndarray) for ndarray in [
        src_ids_1, sent_ids_1, pos_ids_1, task_ids_1, input_mask_1,
        src_ids_2, sent_ids_2, pos_ids_2, task_ids_2, input_mask_2,
        #src_ids_3, sent_ids_3, pos_ids_3, task_ids_3, input_mask_3,
        qids,
    ]]
    #print('inputs', inputs)
    begin_time = time.time()
    outputs = predictor.run(inputs)
    score_outputs = score_predictor.run(inputs)

The C++ error:
2020-09-01 16:29:05 Traceback (most recent call last):
2020-09-01 16:29:05 File "/media/cfs/liuhongru3/Research-master/KG/DuEL_Baseline/ernie/infer_type_ranker.py", line 448, in
2020-09-01 16:29:05 main(args)
2020-09-01 16:29:05 File "/media/cfs/liuhongru3/Research-master/KG/DuEL_Baseline/ernie/infer_type_ranker.py", line 308, in main
2020-09-01 16:29:05 score_outputs = score_predictor.run(inputs)
2020-09-01 16:29:05 paddle.fluid.core_avx.EnforceNotMet: Invoke operator scale error.
2020-09-01 16:29:05 Python Callstacks:
2020-09-01 16:29:05 File "/usr/local/anaconda3/lib/python3.6/site-packages/paddle/fluid/framework.py", line 1771, in append_op
2020-09-01 16:29:05 attrs=kwargs.get("attrs", None))
2020-09-01 16:29:05 File "/usr/local/anaconda3/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
2020-09-01 16:29:05 return self.main_program.current_block().append_op(*args, **kwargs)
2020-09-01 16:29:05 File "/usr/local/anaconda3/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 9947, in scale
2020-09-01 16:29:05 'bias_after_scale': bias_after_scale
2020-09-01 16:29:05 File "/usr/local/anaconda3/lib/python3.6/site-packages/paddle/fluid/io.py", line 1026, in save_inference_model
2020-09-01 16:29:05 var, 1., name="save_infer_model/scale_{}".format(i))
2020-09-01 16:29:05 File "/media/cfs/liuhongru3/Research-master/KG/DuEL_Baseline/ernie/infer_type_ranker.py", line 240, in main
2020-09-01 16:29:05 main_program=predict_prog,
2020-09-01 16:29:05 File "/media/cfs/liuhongru3/Research-master/KG/DuEL_Baseline/ernie/infer_type_ranker.py", line 448, in
2020-09-01 16:29:05 main(args)
2020-09-01 16:29:05 C++ Callstacks:
2020-09-01 16:29:05 Input X(0) is not initialized at [/paddle/paddle/fluid/framework/operator.cc:1146]
2020-09-01 16:29:05 PaddlePaddle Call Stacks:
2020-09-01 16:29:05 0 0x7f3d5d697760p void paddle::platform::EnforceNotMet::Init<char const*>(char const*, char const*, int) + 352
2020-09-01 16:29:05 1 0x7f3d5d697ad9p paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int) + 137
2020-09-01 16:29:05 2 0x7f3d5f5fa34fp paddle::framework::OperatorWithKernel::IndicateDataType(paddle::framework::ExecutionContext const&) const + 1343
2020-09-01 16:29:05 3 0x7f3d5f5fa53fp paddle::framework::OperatorWithKernel::GetExpectedKernelType(paddle::framework::ExecutionContext const&) const + 47
2020-09-01 16:29:05 4 0x7f3d5f5fbdfbp paddle::framework::OperatorWithKernel::ChooseKernel(paddle::framework::RuntimeContext const&, paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 235
2020-09-01 16:29:05 5 0x7f3d5f5fdf68p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&, paddle::framework::RuntimeContext*) const + 728
2020-09-01 16:29:05 6 0x7f3d5f5fe0f3p paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) const + 291
2020-09-01 16:29:05 7 0x7f3d5f5fb7dcp paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, boost::variant<paddle::platform::CUDAPlace, paddle::platform::CPUPlace, paddle::platform::CUDAPinnedPlace, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_, boost::detail::variant::void_> const&) + 332
2020-09-01 16:29:05 8 0x7f3d5f5b1290p paddle::framework::NaiveExecutor::Run() + 560
2020-09-01 16:29:05 9 0x7f3d5d8a3ce8p paddle::AnalysisPredictor::Run(std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> > const&, std::vector<paddle::PaddleTensor, std::allocator<paddle::PaddleTensor> >*, int) + 184
2020-09-01 16:29:05 10 0x7f3d5d7b4426p
2020-09-01 16:29:05 11 0x7f3d5d6ca076p
2020-09-01 16:29:05 12 0x557d715c4fd4p _PyCFunction_FastCallDict + 340
2020-09-01 16:29:05 13 0x557d71652d3ep
2020-09-01 16:29:05 14 0x557d7167719ap _PyEval_EvalFrameDefault + 778
2020-09-01 16:29:05 15 0x557d7164c7dbp
2020-09-01 16:29:05 16 0x557d71652cc5p
2020-09-01 16:29:05 17 0x557d7167719ap _PyEval_EvalFrameDefault + 778
2020-09-01 16:29:05 18 0x557d7164d529p PyEval_EvalCodeEx + 809
2020-09-01 16:29:05 19 0x557d7164e2ccp PyEval_EvalCode + 28
2020-09-01 16:29:05 20 0x557d716caaf4p
2020-09-01 16:29:05 21 0x557d716caef1p PyRun_FileExFlags + 161
2020-09-01 16:29:05 22 0x557d716cb0f4p PyRun_SimpleFileExFlags + 452
2020-09-01 16:29:05 23 0x557d716cec28p Py_Main + 1608
2020-09-01 16:29:05 24 0x557d7159671ep main + 238
2020-09-01 16:29:05 25 0x7f3e57a0a3d5p __libc_start_main + 245
2020-09-01 16:29:05 26 0x557d7167dc98p
2020-09-01 16:29:05

5cnsuln7 1#

Hello, do you mean that you create multiple predictors and get an error when running them at the same time?
PaddlePaddle can create multiple Predictors and run them for inference concurrently. Moreover, a Predictor's inference is thread-safe; the recommended approach is to put each predictor in its own thread so they can predict at the same time.
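The suggestion above (one predictor per thread) can be sketched with plain Python threads. `FakePredictor` below is a hypothetical stand-in for the object returned by `create_paddle_predictor`; only the threading pattern is the point, not the predictor internals:

```python
import threading

class FakePredictor:
    """Hypothetical stand-in for a Paddle predictor exposing run(inputs)."""
    def __init__(self, name):
        self.name = name

    def run(self, inputs):
        # A real predictor returns a list of output tensors; here we just
        # tag each input with the predictor's name.
        return [(self.name, x) for x in inputs]

def run_predictor(predictor, inputs, results):
    # Each predictor runs in its own thread; each thread uses its own
    # predictor instance, per the thread-safety advice above.
    results[predictor.name] = predictor.run(inputs)

predictor = FakePredictor("type_ranker")
score_predictor = FakePredictor("score")

batch = [1, 2, 3]  # stands in for the tensors built from predict_data_generator
results = {}
threads = [threading.Thread(target=run_predictor, args=(p, batch, results))
           for p in (predictor, score_predictor)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With real predictors the `batch` would be the `inputs` list built inside the data-generator loop, and both threads can consume the same batch since neither mutates it.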

js81xvg6 2#


Thanks, I've found the cause of the problem.
