Paddle max op does not support float16 computation

jutyujz0 · posted 5 months ago · in: Other
Follow (0) | Answers (4) | Views (68)

Please ask your question

When I use paddle.amp.auto_cast to compute data in float16, it raises an error.

Errors below:

It seems that PaddlePaddle currently does not support float16 computation for the paddle.max op. I have also checked the documentation for paddle.max and searched the existing issues, but could not find any information about this.
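For context, AMP frameworks typically handle ops without a float16 kernel via a "black list": black-listed ops receive float32 inputs, everything else runs in float16 (Paddle's paddle.amp.auto_cast exposes a custom_black_list parameter for this purpose). A toy illustration of the idea, using NumPy in place of Paddle tensors (this is not Paddle's actual implementation):

```python
import numpy as np

# Ops assumed to have no float16 kernel stay in float32 under autocast.
BLACK_LIST = {"max"}

def amp_call(op_name, fn, x):
    """Toy autocast dispatcher: upcast inputs for black-listed ops,
    otherwise run the op in float16."""
    if op_name in BLACK_LIST:
        return fn(x.astype(np.float32))   # keep unsupported op in float32
    return fn(x.astype(np.float16))       # default: run in float16

x = np.array([0.5, 3.0, 1.5])
result = amp_call("max", np.max, x)
print(result, result.dtype)  # 3.0 float32
```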

Is there any other way to make the paddle.max op support computation at float16?
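One common workaround, pending native support, is to upcast the tensor to float32 just for the reduction and cast the result back. A minimal sketch of the pattern, shown with NumPy standing in for Paddle tensors (with Paddle the equivalent would be something like `paddle.max(x.astype('float32')).astype('float16')` — hypothetical usage, not verified against 2.4.2):

```python
import numpy as np

def max_fp32(x):
    """Workaround sketch: compute max in float32, return float16.
    Upcasting is lossless for float16 values, so the result is exact."""
    return np.max(x.astype(np.float32)).astype(np.float16)

x = np.array([1.5, 2.25, 0.5], dtype=np.float16)
m = max_fp32(x)
print(m, m.dtype)  # 2.25 float16
```

The extra casts cost a little memory traffic but keep the rest of the model in float16.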

The environment I'm using:

Is debug build: False
Paddle Version: 2.4.2
CUDA used to build Paddle: 11.2

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.8.16 (default, Mar  2 2023, 03:21:46)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-136-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY

Nvidia driver version: 495.29.05
cuDNN version: 8.5.0.96
Is XNNPACK available: True

pw9qyyiw2#

I think Paddle should make the max op support computation at float16.


x3naxklr3#

This is already planned; support is expected soon.


uplii1fm4#

> This is already planned; support is expected soon.

OK
