Paddle: building Paddle 2.0 on Jetson Nano fails with core dumped while compiling activation_op.cu.o

xjreopfe  posted on 2021-11-30  in  Java
Follow (0) | Answers (14) | Views (465)

I followed the tutorial at https://blog.csdn.net/weixin_45449540/article/details/107704028

All of the earlier steps complete successfully, but when the build reaches

activation_op.dir/activation_op.cu.o

it reports Error 139 with a core dump. I have retried many times.

I have also allocated 20 GB of swap (see the sketch after the table):

NAME          TYPE        SIZE   USED PRIO
/var/swapfile file         20G  19.3M   -1
/dev/zram0    partition 495.4M 146.4M    5
/dev/zram1    partition 495.4M 146.8M    5
/dev/zram2    partition 495.4M   147M    5
/dev/zram3    partition 495.4M 146.7M    5
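
For reference, a 20 GB swap file like the one shown above is typically created with something along these lines (a minimal sketch; the path /var/swapfile is taken from the output, everything else is an assumption about how it was set up):

sudo fallocate -l 20G /var/swapfile     # reserve 20 GB on disk
sudo chmod 600 /var/swapfile            # restrict access to root
sudo mkswap /var/swapfile               # format it as swap
sudo swapon /var/swapfile               # enable it immediately
# to keep it across reboots, add "/var/swapfile none swap sw 0 0" to /etc/fstab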

But the build keeps failing with the core dumped error. I have been stuck on this for several days.

I also changed make -j4 to make -j1, and it still fails.

[ 30%] Building CUDA object paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o
Segmentation fault (core dumped)

paddle/fluid/operators/CMakeFiles/activation_op.dir/build.make:110: recipe for target 'paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o' failed

make[2]: *** [paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o] Error 139
CMakeFiles/Makefile2:48037: recipe for target 'paddle/fluid/operators/CMakeFiles/activation_op.dir/all' failed

make[1]: *** [paddle/fluid/operators/CMakeFiles/activation_op.dir/all] Error 2
Makefile:129: recipe for target 'all' failed

tct7dpnv1#

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your question as soon as possible. Please make sure you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the official API docs, FAQ, historical GitHub issues, and the AI community for an answer. Have a nice day!


qvtsj1bj2#

Hi, the real compilation error may have occurred earlier in the build. Could you post the complete log?


wn9m85ua3#

Where can I find the complete build log? I'll try to export it. The Error 139 message doesn't include a log file path.


50pmv0ei4#

nohup make -j4 &

The complete log will then be saved to nohup.out.
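
If you also want to watch the build while keeping the log (assuming a standard shell with tee available), an alternative sketch is:

make -j1 2>&1 | tee build.log    # stream output to the terminal and save it to build.log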


vq8itlhq5#

Log:

[  1%] Built target extern_zlib
[  1%] Built target extern_protobuf
[  1%] Built target extern_gflags
[  1%] copy_if_different /root/Paddle/build/paddle/fluid/operators/jit/kernels.h
[  1%] Built target copy_kernels_command
[  1%] copy_if_different /root/Paddle/build/paddle/fluid/inference/api/paddle_inference_pass.h
[  1%] Built target copy_paddle_inference_pass_command
[  1%] copy_if_different /root/Paddle/build/paddle/fluid/pybind/pybind.h
[  1%] Built target copy_pybind_command
[  2%] Built target extern_glog
[  3%] Built target framework_proto
[  3%] Built target flags
[  3%] Built target stringpiece
[  3%] Built target extern_eigen3
[  3%] Built target extern_boost
[  3%] Built target extern_threadpool
[  4%] Built target extern_dlpack
[  4%] Built target error_codes_proto
[  4%] Built target cuda_error_proto
[  4%] Built target profiler_proto
[  4%] Built target data_feed_proto
[  4%] Built target trainer_desc_proto
[  4%] Built target heter_service_proto
[  4%] Built target extern_xxhash
[  4%] Built target extern_openblas
[  4%] Built target cblas
[  5%] Built target errors
[  5%] Built target enforce
[  5%] Built target dynamic_loader
[  5%] Built target dynload_cuda
[  5%] Built target monitor
[  5%] Built target gpu_info
[  5%] Built target cpu_info
[  6%] Built target place
[  6%] Built target system_allocator
[  6%] Built target memory_block
[  6%] Built target buddy_allocator
[  6%] Built target allocator
[  6%] Built target cuda_stream
[  7%] Built target device_tracer
[  8%] Built target profiler
[  8%] Built target naive_best_fit_allocator
[  8%] Built target cuda_device_guard
[  8%] Built target cuda_allocator
[  8%] Built target aligned_allocator
[  8%] Built target best_fit_allocator
[  8%] Built target pinned_allocator
[  8%] Built target buffered_allocator
[  8%] Built target locked_allocator
[  8%] Built target cpu_allocator
[  8%] Built target thread_local_allocator
[  8%] Built target auto_growth_best_fit_allocator
[  8%] Built target retry_allocator
[  8%] Built target allocator_strategy
[  8%] Built target allocator_facade
[  8%] Built target malloc
[  8%] Built target cuda_resource_pool
[  8%] Built target cudnn_workspace_helper
[  8%] Built target cpu_helper
[  8%] Built target stream_callback_manager
[  8%] Built target device_context
[  8%] Built target memcpy
[  8%] Built target memory
[  8%] Built target ddim
[  8%] Built target data_type
[  8%] Built target tensor
[  8%] Built target selected_rows
[  8%] Built target version
[  8%] Built target lod_tensor
[  8%] Built target threadpool
[  9%] Built target var_type_traits
[  9%] Built target scope
[  9%] Built target reset_tensor_array
[  9%] Built target extern_cub
[  9%] Performing update step for 'extern_cryptopp'
-- extern_cryptopp update command succeeded.  See also /root/Paddle/build/third_party/cryptopp/src/extern_cryptopp-stamp/extern_cryptopp-update-*.log
[  9%] Performing configure step for 'extern_cryptopp'
-- extern_cryptopp configure command succeeded.  See also /root/Paddle/build/third_party/cryptopp/src/extern_cryptopp-stamp/extern_cryptopp-configure-*.log
[  9%] Performing build step for 'extern_cryptopp'
[ 98%] Built target cryptopp-object
[ 99%] Built target cryptopp-static
[100%] Built target cryptopp-shared
[  9%] Performing install step for 'extern_cryptopp'
-- extern_cryptopp install command succeeded.  See also /root/Paddle/build/third_party/cryptopp/src/extern_cryptopp-stamp/extern_cryptopp-install-*.log
[  9%] Completed 'extern_cryptopp'
[  9%] Built target extern_cryptopp
[  9%] Built target device_code
[  9%] Built target variable_helper
[  9%] Built target lodtensor_printer
[  9%] Built target timer
[  9%] Built target device_memory_aligment
[  9%] Built target collective_helper
[  9%] Built target denormal
[  9%] Performing update step for 'extern_warpctc'
-- extern_warpctc update command succeeded.  See also /root/Paddle/build/third_party/warpctc/src/extern_warpctc-stamp/extern_warpctc-update-*.log
[  9%] Performing configure step for 'extern_warpctc'
-- extern_warpctc configure command succeeded.  See also /root/Paddle/build/third_party/warpctc/src/extern_warpctc-stamp/extern_warpctc-configure-*.log
[  9%] Performing build step for 'extern_warpctc'
[100%] Built target warpctc
[  9%] Performing install step for 'extern_warpctc'
-- extern_warpctc install command succeeded.  See also /root/Paddle/build/third_party/warpctc/src/extern_warpctc-stamp/extern_warpctc-install-*.log
[  9%] Completed 'extern_warpctc'
[  9%] Built target extern_warpctc
[  9%] Built target dynload_warpctc
[  9%] Built target string_helper
[ 10%] Built target custom_tensor
[ 11%] Built target op_meta_info
[ 12%] Built target attribute
[ 12%] Built target garbage_collector
[ 12%] Built target no_need_buffer_vars_inference
[ 12%] Built target unused_var_check
[ 12%] Built target data_device_transform
[ 12%] Built target data_type_transform
[ 12%] Built target op_kernel_type
[ 12%] Built target blas
[ 12%] Built target math_function
[ 12%] Built target data_layout_transform
[ 12%] Built target data_transform
[ 12%] Built target op_proto_maker
[ 12%] Built target op_info
[ 12%] Built target shape_inference
[ 12%] Built target transfer_scope_cache
[ 12%] Built target op_call_stack
[ 12%] Built target nan_inf_utils
[ 12%] Built target operator
[ 12%] Built target proto_desc
[ 12%] Built target executor_gc_helper
[ 12%] Built target feed_fetch_method
[ 13%] Built target lod_rank_table
[ 13%] Built target op_version_proto
[ 13%] Built target op_registry
[ 13%] Built target custom_operator
[ 13%] Built target op_version_registry
[ 13%] Built target node
[ 13%] Built target pretty_log
[ 13%] Built target graph
[ 13%] Built target graph_traits
[ 14%] Built target graph_helper
[ 14%] Built target graph_pattern_detector
[ 14%] Built target pass
[ 14%] Built target fuse_pass_base
[ 14%] Built target graph_to_program_pass
[ 14%] Built target ps_gpu_wrapper
[ 14%] Built target box_wrapper
[ 14%] Built target heter_wrapper
[ 14%] Built target fleet_wrapper
[ 14%] Built target shell
[ 14%] Built target fs
[ 14%] Built target prepared_operator
[ 14%] Built target imperative_flag
[ 14%] Built target layer
[ 14%] Built target common_infer_shape_functions
[ 14%] Built target op_variant
[ 14%] Built target while_op_helper
[ 14%] Built target conditional_block_op_helper
[ 14%] Built target recurrent_op_helper
[ 15%] Built target conditional_block_op
[ 15%] Built target executor
[ 15%] Built target recurrent_op
[ 15%] Built target paddle_framework
[ 15%] Built target generator
[ 15%] Built target op_compatible_info
[ 15%] Built target naive_executor
[ 15%] Built target reader
[ 15%] Built target fuse_bn_add_act_pass
[ 15%] Built target placement_pass_base
[ 15%] Built target cudnn_placement_pass
[ 15%] Built target unsqueeze2_eltwise_fuse_pass
[ 15%] Built target fc_elementwise_layernorm_fuse_pass
[ 15%] Built target fuse_bn_act_pass
[ 15%] Built target simplify_with_basic_ops_pass
[ 16%] Built target skip_layernorm_fuse_pass
[ 16%] Built target delete_quant_dequant_op_pass
[ 16%] Built target shuffle_channel_detect_pass
[ 16%] Built target quant_conv2d_dequant_fuse_pass
[ 16%] Built target runtime_context_cache_pass
[ 16%] Built target map_matmul_to_mul_pass
[ 16%] Built target fc_fuse_pass
[ 16%] Built target identity_scale_op_clean_pass
[ 16%] Built target push_dense_op
[ 16%] Built target lock_free_optimize_pass
[ 16%] Built target adaptive_pool2d_convert_global_pass
[ 16%] Built target graph_viz_pass
[ 16%] Built target repeated_fc_relu_fuse_pass
[ 16%] Built target embedding_fc_lstm_fuse_pass
[ 16%] Built target embedding_eltwise_layernorm_fuse_pass
[ 16%] Built target seqpool_cvm_concat_fuse_pass
[ 16%] Built target conv_bn_fuse_pass
[ 16%] Built target subgraph_detector
[ 16%] Built target coalesce_grad_tensor_pass
[ 16%] Built target attention_lstm_fuse_pass
[ 16%] Built target conv_affine_channel_fuse_pass
[ 16%] Built target fc_lstm_fuse_pass
[ 16%] Built target conv_elementwise_add_fuse_pass
[ 16%] Built target fc_gru_fuse_pass
[ 16%] Built target fuse_relu_depthwise_conv_pass
[ 16%] Built target delete_quant_dequant_filter_op_pass
[ 16%] Built target transpose_flatten_concat_fuse_pass
[ 16%] Built target multihead_matmul_fuse_pass
[ 16%] Built target seq_concat_fc_fuse_pass
[ 16%] Built target multi_batch_merge_pass
[ 16%] Built target fuse_elewise_add_act_pass
[ 16%] Built target seqconv_eltadd_relu_fuse_pass
[ 16%] Built target squared_mat_sub_fuse_pass
[ 16%] Built target conv_elementwise_add_act_fuse_pass
[ 16%] Built target conv_elementwise_add2_act_fuse_pass
[ 16%] Built target pass_builder
[ 16%] Built target seqpool_concat_fuse_pass
[ 16%] Built target is_test_pass
[ 16%] Built target sync_batch_norm_pass
[ 16%] Built target fuse_optimizer_op_pass
[ 16%] Built target fuse_adam_op_pass
[ 17%] Built target fuse_sgd_op_pass
[ 17%] Built target fuse_momentum_op_pass
[ 17%] Built target computation_op_handle
[ 17%] Built target conditional_block_op_eager_deletion_pass
[ 17%] Built target var_handle
[ 17%] Built target reference_count_pass_helper
[ 17%] Built target while_op_eager_deletion_pass
[ 17%] Built target recurrent_op_eager_deletion_pass
[ 17%] Built target eager_deletion_op_handle
[ 17%] Built target eager_deletion_pass
[ 17%] Built target op_handle_base
[ 17%] Built target share_tensor_buffer_functor
[ 17%] Built target share_tensor_buffer_op_handle
[ 17%] Built target multi_devices_helper
[ 18%] Built target memory_reuse_pass
[ 18%] Built target buffer_shared_inplace_op_pass
[ 18%] Built target inplace_addto_op_pass
[ 18%] Built target op_graph_view
[ 18%] Built target reference_count_pass
[ 18%] Built target buffer_shared_cross_op_memory_reuse_pass
[ 18%] Built target add_reader_dependency_pass
[ 18%] Built target backward_optimizer_op_deps_pass
[ 18%] Built target variable_visitor
[ 18%] Built target all_reduce_op_handle
[ 18%] Built target all_reduce_deps_pass
[ 18%] Built target modify_op_lock_and_record_event_pass
[ 18%] Built target multi_devices_graph_print_pass
[ 19%] Built target multi_devices_graph_check_pass
[ 19%] Built target fused_all_reduce_op_handle
[ 19%] Built target grad_merge_all_reduce_op_handle
[ 19%] Built target fuse_all_reduce_op_pass
[ 19%] Built target sequential_execution_pass
[ 20%] Built target broadcast_op_handle
[ 20%] Built target selected_rows_functor
[ 20%] Built target reduce_op_handle
[ 20%] Built target scale_loss_grad_op_handle
[ 20%] Built target fused_broadcast_op_handle
[ 20%] Built target rpc_op_handle
[ 20%] Built target fetch_barrier_op_handle
[ 20%] Built target multi_devices_graph_pass
[ 20%] Built target set_reader_device_info_utils
[ 20%] Built target code_generator
[ 20%] Built target fusion_group_pass
[ 20%] Built target build_strategy
[ 20%] Built target ssa_graph_executor
[ 21%] Built target fetch_async_op_handle
[ 21%] Built target fast_threaded_ssa_graph_executor
[ 21%] Built target scope_buffered_monitor
[ 21%] Built target fetch_op_handle
[ 21%] Built target threaded_ssa_graph_executor
[ 21%] Built target parallel_ssa_graph_executor
[ 21%] Built target bind_threaded_ssa_graph_executor
[ 21%] Built target async_ssa_graph_executor
[ 21%] Built target scope_buffered_ssa_graph_executor
[ 21%] Built target paddle_crypto
[ 21%] Built target gradient_accumulator
[ 21%] Built target engine
[ 21%] Built target amp
[ 21%] Built target op_desc_meta
[ 21%] Built target program_desc_tracer
[ 21%] Built target activation_functions
[ 21%] Built target lstm_compute
[ 21%] Built target lstm_op
[ 21%] Built target unpool_op
[ 21%] Built target unbind_op
[ 21%] Built target truncated_gaussian_random_op
[ 21%] Built target tril_triu_op
[ 21%] Built target transpose_op
[ 21%] Built target trace_op
[ 22%] Built target top_k_v2_op
[ 22%] Built target top_k_op
[ 22%] Built target tile_op
[ 22%] Built target sync_batch_norm_op
[ 22%] Built target tensor_array_to_tensor_op
[ 22%] Built target tdm_child_op
[ 23%] Built target sum_op
[ 23%] Built target strided_slice_op
[ 23%] Built target squeeze_op
[ 23%] Built target squared_l2_distance_op
[ 23%] Built target spp_op
[ 23%] Built target split_op
[ 23%] Built target split_lod_tensor_op
[ 23%] Built target softmax_with_cross_entropy_op
[ 23%] Built target smooth_l1_loss_op
[ 23%] Built target size_op
[ 23%] Built target sign_op
[ 23%] Built target sigmoid_cross_entropy_with_logits_op
[ 23%] Built target shuffle_channel_op
[ 23%] Built target shrink_rnn_memory_op
[ 23%] Built target shape_op
[ 23%] Built target select_output_op
[ 23%] Built target segment_pool_op
[ 23%] Built target scatter_nd_add_op
[ 23%] Built target squared_l2_norm_op
[ 23%] Built target scale_op
[ 23%] Built target save_op
[ 23%] Built target save_combine_op
[ 23%] Built target sampling_id_op
[ 23%] Built target run_program_op
[ 23%] Built target roll_op
[ 23%] Built target roi_pool_op
[ 23%] Built target roi_align_op
[ 23%] Built target rnn_op
[ 23%] Built target rnn_memory_helper_op
[ 23%] Built target uniform_random_batch_size_like_op
[ 23%] Built target reverse_op
[ 23%] Built target reshape_op
[ 23%] Built target reorder_lod_tensor_by_rank_op
[ 23%] Built target selu_op
[ 23%] Built target range_op
[ 23%] Built target random_crop_op
[ 23%] Built target randint_op
[ 23%] Built target queue_generator_op
[ 23%] Built target quantize_op
[ 23%] Built target pull_sparse_v2_op
[ 24%] Built target pull_box_sparse_op
[ 24%] Built target pull_box_extended_sparse_op
[ 24%] Built target psroi_pool_op
[ 24%] Built target prroi_pool_op
[ 24%] Built target print_op
[ 24%] Built target prelu_op
[ 24%] Built target pool_with_index_op
[ 24%] Built target pool_op
[ 24%] Built target sequence_padding
[ 24%] Built target sequence_scale
[ 24%] Built target warpctc_op
[ 24%] Built target where_op
[ 24%] Built target pixel_shuffle_op
[ 24%] Built target partial_sum_op
[ 24%] Built target partial_concat_op
[ 24%] Built target pad_op
[ 24%] Built target pad_constant_like_op
[ 24%] Built target lod_rank_table_op
[ 24%] Built target dist_op
[ 24%] Built target unique_with_counts_op
[ 24%] Built target diag_embed_op
[ 24%] Built target unstack_op
[ 24%] Built target expand_as_v2_op
[ 24%] Built target assign_value_op
[ 24%] Built target norm_op
[ 24%] Built target enqueue_op
[ 24%] Built target dot_op
[ 24%] Built target dequeue_op
[ 24%] Built target cumsum_op
[ 24%] Built target spectral_norm_op
[ 24%] Built target deformable_conv_op
[ 24%] Built target huber_loss_op
[ 24%] Built target is_empty_op
[ 24%] Built target crop_tensor_op
[ 24%] Built target select_input_op
[ 24%] Built target gather_op
[ 24%] Built target expand_v2_op
[ 24%] Built target crf_decoding_op
[ 24%] Built target crop_op
[ 24%] Built target cos_sim_op
[ 24%] Built target bilinear_tensor_product_op
[ 24%] Built target tdm_sampler_op
[ 24%] Built target gaussian_random_batch_size_like_op
[ 24%] Built target dgc_clip_by_norm_op
[ 24%] Built target eye_op
[ 25%] Built target seed_op
[ 25%] Built target delete_var_op
[ 25%] Built target unsqueeze_op
[ 25%] Built target space_to_depth_op
[ 25%] Built target bce_loss_op
[ 25%] Built target diag_v2_op
[ 25%] Built target beam_search_op
[ 25%] Built target uniform_random_op
[ 25%] Built target assign_op
[ 25%] Built target arg_min_op
[ 25%] Built target multinomial_op
[ 25%] Built target allclose_op
[ 25%] Built target detection_map_op
[ 25%] Built target add_position_encoding_op
[ 25%] Built target conv_op
[ 25%] Built target chunk_eval_op
[ 25%] Built target dequantize_log_op
[ 25%] Built target hierarchical_sigmoid_op
[ 25%] Built target increment_op
[ 25%] Built target array_to_lod_tensor_op
[ 25%] Built target affine_grid_op
[ 25%] Built target where_index_op
[ 25%] Built target fake_quantize_op
[ 25%] Built target flip_op
[ 25%] Built target lstm_unit_op
[ 25%] Built target average_accumulates_op
[ 26%] Built target gaussian_random_op
[ 26%] Built target assert_op
[ 26%] Built target fake_dequantize_op
[ 26%] Built target arg_max_op
[ 27%] Built target split_selected_rows_op
[ 27%] Built target ascend_trigger_op
[ 27%] Built target batch_fc_op
[ 27%] Built target affine_channel_op
[ 27%] Built target expand_as_op
[ 27%] Built target cross_op
[ 27%] Built target pull_sparse_op
[ 28%] Built target nce_op
[ 28%] Built target softmax_op
[ 28%] Built target concat_op
[ 28%] Built target cvm_op
[ 28%] Built target lookup_table_op
[ 28%] Built target similarity_focus_op
[ 28%] Building CUDA object paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o
Segmentation fault (core dumped)
paddle/fluid/operators/CMakeFiles/activation_op.dir/build.make:110: recipe for target 'paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o' failed
make[2]: *** [paddle/fluid/operators/CMakeFiles/activation_op.dir/activation_op.cu.o] Error 139
CMakeFiles/Makefile2:47616: recipe for target 'paddle/fluid/operators/CMakeFiles/activation_op.dir/all' failed
make[1]: *** [paddle/fluid/operators/CMakeFiles/activation_op.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

mfpqipee6#

Please change your cmake command to:

cmake .. -DWITH_CONTRIB=OFF -DWITH_MKL=OFF -DWITH_MKLDNN=OFF -DWITH_TESTING=OFF \
  -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON -DWITH_PYTHON=ON -DPY_VERSION=3.6 \
  -DWITH_XBYAK=OFF -DWITH_NV_JETSON=ON -DWITH_TENSORRT=ON -DTENSORRT_ROOT=/usr \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.0/bin/nvcc -DWITH_NCCL=OFF -DCUDA_ARCH_NAME=Auto

If your system is JetPack 4.4 or 4.5, change cuda-10.0 in -DCMAKE_CUDA_COMPILER to cuda-10.2.


xghobddn7#

make[2]: *** No rule to make target 'paddle/fluid/framework/ir/CMakeFiles/graph_to_program_pass.dir/depend'.  Stop.
CMakeFiles/Makefile2:11305: recipe for target 'paddle/fluid/framework/ir/CMakeFiles/graph_to_program_pass.dir/all' failed
make[1]: *** [paddle/fluid/framework/ir/CMakeFiles/graph_to_program_pass.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

I got this error after using the cmake command you suggested.
The depend.make in graph_to_program_pass.dir is empty; its contents are just:


# Empty dependencies file for graph_to_program_pass.

# This may be replaced when dependencies are built.

cbjzeqam8#

@Wall-ee Are you on the develop branch? Please switch to the v2.0.1 branch.


l5tcr1uw9#

I'm currently using 2.0.0; I'll switch to 2.0.1 and give it a try.


beq87vna10#

I can't seem to find 2.0.1; there's only release/2.0?


a9wyjsp711#

Switch to the tag, or run git checkout v2.0.1.
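
If the tag is not visible locally, fetching tags first usually helps (a minimal sketch, assuming the remote is the official Paddle repository):

git fetch origin --tags          # pull down all release tags
git tag -l "v2.0*"               # verify that v2.0.1 is now listed
git checkout v2.0.1              # check out the release tag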


6ss1mwsb12#

It worked!! After three full weeks, haha, it's finally done. A few details:
1. When configuring, if you have already built NCCL, make sure NCCL is switched on (see the command sketched below this list).
2. Replace all of the GitHub clone URLs in the build scripts with download sources that are actually reachable.
3. The download URLs for the patches referenced in the build files also need to be fixed.
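
For point 1, a sketch of the configure command from #6 with NCCL switched on (an assumption that the other flags stay unchanged; use cuda-10.0 or cuda-10.2 to match your JetPack version):

cmake .. -DWITH_CONTRIB=OFF -DWITH_MKL=OFF -DWITH_MKLDNN=OFF -DWITH_TESTING=OFF \
  -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON -DWITH_PYTHON=ON -DPY_VERSION=3.6 \
  -DWITH_XBYAK=OFF -DWITH_NV_JETSON=ON -DWITH_TENSORRT=ON -DTENSORRT_ROOT=/usr \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.2/bin/nvcc \
  -DWITH_NCCL=ON -DCUDA_ARCH_NAME=Auto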


ckocjqey13#

Are you satisfied with the resolution of your issue?



qyuhtwio14#

But after the build, installing the generated Python wheel runs into a new problem, shown below.
It reports a zipfile error:
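
(The install command that produced the output below was presumably something like the following; the wheel filename is taken from the log, the pip invocation itself is an assumption.)

pip3 install ./paddlepaddle_gpu-0.0.0-cp36-cp36m-linux_aarch64.whl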

Processing ./paddlepaddle_gpu-0.0.0-cp36-cp36m-linux_aarch64.whl
Requirement already satisfied: requests>=2.20.0 in /usr/local/lib/python3.6/dist-packages (from paddlepaddle-gpu==0.0.0) (2.25.1)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from paddlepaddle-gpu==0.0.0) (8.1.2)
Requirement already satisfied: gast>=0.3.3 in /usr/local/lib/python3.6/dist-packages (from paddlepaddle-gpu==0.0.0) (0.4.0)
Requirement already satisfied: protobuf>=3.1.0 in /usr/local/lib/python3.6/dist-packages (from paddlepaddle-gpu==0.0.0) (3.15.6)
Requirement already satisfied: numpy>=1.13 in /usr/lib/python3/dist-packages (from paddlepaddle-gpu==0.0.0) (1.13.3)
Requirement already satisfied: astor in /usr/local/lib/python3.6/dist-packages (from paddlepaddle-gpu==0.0.0) (0.8.1)
Requirement already satisfied: six in /usr/lib/python3/dist-packages (from paddlepaddle-gpu==0.0.0) (1.11.0)
Requirement already satisfied: decorator in /usr/lib/python3/dist-packages (from paddlepaddle-gpu==0.0.0) (4.1.2)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.20.0->paddlepaddle-gpu==0.0.0) (2018.1.18)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.20.0->paddlepaddle-gpu==0.0.0) (2.6)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3/dist-packages (from requests>=2.20.0->paddlepaddle-gpu==0.0.0) (1.22)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/lib/python3/dist-packages (from requests>=2.20.0->paddlepaddle-gpu==0.0.0) (3.0.4)
Installing collected packages: paddlepaddle-gpu
ERROR: Exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 189, in _main
    status = self.run(options, args)
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/req_command.py", line 178, in wrapper
    return func(self, options, args)
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 400, in run
    pycompile=options.compile,
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/__init__.py", line 88, in install_given_reqs
    pycompile=pycompile,
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/req_install.py", line 796, in install
    requested=self.user_supplied,
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 827, in install_wheel
    requested=requested,
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 662, in _install_wheel
    file.save()
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/operations/install/wheel.py", line 429, in save
    shutil.copyfileobj(f, dest)
  File "/usr/lib/python3.6/shutil.py", line 79, in copyfileobj
    buf = fsrc.read(length)
  File "/usr/lib/python3.6/zipfile.py", line 872, in read
    data = self._read1(n)
  File "/usr/lib/python3.6/zipfile.py", line 948, in _read1
    data = self._decompressor.decompress(data, n)
zlib.error: Error -3 while decompressing data: invalid code lengths set
