gopyfrb31#
I'm not from the official team, but you could try something like this:
assert isinstance(a, paddle.fluid.LoDTensor)
assert isinstance(b, paddle.fluid.LoDTensor)
return fluid.layers.elementwise_mul(
    a, fluid.layers.logical_not(b)
)
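A minimal end-to-end sketch of this mask-multiply idea, assuming the Paddle 1.x static-graph API (paddle.fluid); the variable names x/score, the 0.5 threshold, and the extra cast of the boolean mask to float32 are illustrative additions, not part of the original answer:

import numpy as np
import paddle.fluid as fluid

# two feed variables: the values to filter and the scores used for masking
x = fluid.layers.data(name='x', shape=[4], dtype='float32', append_batch_size=False)
score = fluid.layers.data(name='score', shape=[4], dtype='float32', append_batch_size=False)

# build a 0/1 keep-mask: 1.0 where score >= 0.5, else 0.0
thresh = fluid.layers.fill_constant(shape=[4], dtype='float32', value=0.5)
below = fluid.layers.less_than(score, thresh)                        # bool tensor
keep = fluid.layers.cast(fluid.layers.logical_not(below), 'float32') # match x's dtype

# emulate the numpy assignment x[score < 0.5] = 0 by multiplying with the mask
y = fluid.layers.elementwise_mul(x, keep)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
out, = exe.run(feed={'x': np.array([1., 2., 3., 4.], dtype='float32'),
                     'score': np.array([0.9, 0.1, 0.7, 0.2], dtype='float32')},
               fetch_list=[y])
print(out)  # expected: [1. 0. 3. 0.]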
dphi5xsq2#
@parap1uie-s Thanks, I'm currently working around it the same way.
qpgpyjmq3#
Thanks for the answer, @parap1uie-s.
ocebsuys4#
@gavin1332 I rewrote the following numpy code in Paddle:
# numpy
conf[best_truth_overlap < neg_thresh] = 0

# paddle
conf = fluid.layers.elementwise_mul(
    conf,
    fluid.layers.cast(best_truth_overlap > neg_thresh, 'int32')
)
but it crashed with the following error:
W0331 13:53:30.755509  6610 operator.cc:181] cast raises an exception thrust::system::system_error, parallel_for failed: invalid configuration argument
F0331 13:53:30.755676  6610 exception_holder.h:37] std::exception caught, parallel_for failed: invalid configuration argument
***Check failure stack trace:***
W0331 13:53:30.757980  6609 operator.cc:181] elementwise_add raises an exception thrust::system::system_error, parallel_for failed: invalid configuration argument
F0331 13:53:30.758033  6609 exception_holder.h:37] std::exception caught, parallel_for failed: invalid configuration argument
***Check failure stack trace:***
@ 0x7f562c55ffed google::LogMessage::Fail()
@ 0x7f562c55ffed google::LogMessage::Fail()
@ 0x7f562c563a9c google::LogMessage::SendToLog()
@ 0x7f562c563a9c google::LogMessage::SendToLog()
@ 0x7f562c55fb13 google::LogMessage::Flush()
@ 0x7f562c55fb13 google::LogMessage::Flush()
@ 0x7f562c564fae google::LogMessageFatal::~LogMessageFatal()
@ 0x7f562c564fae google::LogMessageFatal::~LogMessageFatal()
@ 0x7f562ea29748 paddle::framework::details::ExceptionHolder::Catch()
@ 0x7f562ea29748 paddle::framework::details::ExceptionHolder::Catch()
@ 0x7f562ead4b6e paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync()
@ 0x7f562ead4b6e paddle::framework::details::FastThreadedSSAGraphExecutor::RunOpSync()
@ 0x7f562ead377f paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp()
@ 0x7f562ead3a44 _ZNSt17_Function_handlerIFvvESt17reference_wrapperISt12_Bind_simpleIFS1_ISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS6_12OpHandleBaseESt6atomicIiESt4hashISA_ESt8equal_toISA_ESaISt4pairIKSA_SC_EEESA_RKSt10shared_ptrINS5_13BlockingQueueImEEEEUlvE_vEEEvEEEE9_M_invokeERKSt9_Any_data
@ 0x7f562ead377f paddle::framework::details::FastThreadedSSAGraphExecutor::RunOp()
@ 0x7f562ead3a44 _ZNSt17_Function_handlerIFvvESt17reference_wrapperISt12_Bind_simpleIFS1_ISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS6_12OpHandleBaseESt6atomicIiESt4hashISA_ESt8equal_toISA_ESaISt4pairIKSA_SC_EEESA_RKSt10shared_ptrINS5_13BlockingQueueImEEEEUlvE_vEEEvEEEE9_M_invokeERKSt9_Any_data
@ 0x7f562c5b8d43 std::_Function_handler<>::_M_invoke()
@ 0x7f562c5b8d43 std::_Function_handler<>::_M_invoke()
@ 0x7f562c348537 std::__future_base::_State_base::_M_do_set()
@ 0x7f5692ac6a99 __pthread_once_slow
@ 0x7f562c348537 std::__future_base::_State_base::_M_do_set()
@ 0x7f562eacef32 _ZNSt13__future_base11_Task_stateISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS4_12OpHandleBaseESt6atomicIiESt4hashIS8_ESt8equal_toIS8_ESaISt4pairIKS8_SA_EEES8_RKSt10shared_ptrINS3_13BlockingQueueImEEEEUlvE_vEESaIiEFvvEE6_M_runEv
@ 0x7f5692ac6a99 __pthread_once_slow
@ 0x7f562eacef32 _ZNSt13__future_base11_Task_stateISt5_BindIFZN6paddle9framework7details28FastThreadedSSAGraphExecutor10RunOpAsyncEPSt13unordered_mapIPNS4_12OpHandleBaseESt6atomicIiESt4hashIS8_ESt8equal_toIS8_ESaISt4pairIKS8_SA_EEES8_RKSt10shared_ptrINS3_13BlockingQueueImEEEEUlvE_vEESaIiEFvvEE6_M_runEv
@ 0x7f562c34a764 _ZZN10ThreadPoolC1EmENKUlvE_clEv
@ 0x7f56534f6421 execute_native_thread_routine_compat
@ 0x7f562c34a764 _ZZN10ThreadPoolC1EmENKUlvE_clEv
@ 0x7f5692abf6ba start_thread
@ 0x7f56534f6421 execute_native_thread_routine_compat
@ 0x7f56927f541d clone
@ 0x7f5692abf6ba start_thread
@ (nil) (unknown)
Aborted (core dumped)
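For comparison, a NumPy-only sketch of what the conversion above is intended to compute, with made-up values for conf, best_truth_overlap and neg_thresh (not taken from the reporter's code); it also shows where the strict > used in the Paddle snippet diverges from the original numpy assignment:

import numpy as np

# made-up example values, purely for illustration
conf = np.array([3, 1, 4, 1], dtype=np.int32)
best_truth_overlap = np.array([0.1, 0.6, 0.3, 0.5], dtype=np.float32)
neg_thresh = 0.5

# original numpy form: zero out entries whose overlap falls below the threshold
ref = conf.copy()
ref[best_truth_overlap < neg_thresh] = 0

# mask-multiply form; `>=` reproduces the numpy line exactly, while the `>`
# used in the Paddle snippet would also zero entries equal to neg_thresh
masked = conf * (best_truth_overlap >= neg_thresh).astype(np.int32)
strict = conf * (best_truth_overlap > neg_thresh).astype(np.int32)

print(ref)     # [0 1 0 1]
print(masked)  # [0 1 0 1]
print(strict)  # [0 1 0 0]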