Paddle version: 1.8.1
Program:
import paddle.fluid as fluid
import numpy as np

with fluid.dygraph.guard():
    value = np.arange(100).reshape(10, 2, 5).astype("float32")
    out1 = np.zeros((2, 3), dtype=np.float32)
    out = fluid.dygraph.to_variable(out1)
    a = fluid.dygraph.to_variable(value)
    a.stop_gradient = False
    linear = []
    for i in range(10):
        linear.append(fluid.dygraph.Linear(5, 3))
    for i in range(10):
        out = out + linear[i](a[i, :, :])
    print(out)
    dx = fluid.dygraph.grad(outputs=out, inputs=a, create_graph=True,
                            retain_graph=True, only_inputs=True,
                            allow_unused=False)[0]
    print(dx.numpy())
    dx.backward()
    print(a.gradient())
Error message:
Traceback (most recent call last):
File "31.py", line 17, in <module>
only_inputs=True,allow_unused=False)[0]
File "", line 2, in grad
File "/share/group-soft/anaconda/install/envs/paddle/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/share/group-soft/anaconda/install/envs/paddle/lib/python3.7/site-packages/paddle/fluid/framework.py", line 216, in __impl__
return func(*args, **kwargs)
File "/share/group-soft/anaconda/install/envs/paddle/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 559, in grad
create_graph, retain_graph, allow_unused, only_inputs)
paddle.fluid.core_avx.EnforceNotMet:
C++ Call Stacks (More useful to developers):
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::imperative::ReadyGradVarInfoMap::GetTarget(paddle::imperative::VariableWrapper const*) const
3 paddle::imperative::PartialGradTask::CreateResult()
4 paddle::imperative::PartialGradTask::Run()
5 paddle::imperative::PartialGradEngine::Execute()
Error Message Summary:
PermissionDeniedError: Target var generated_var_1@GRAD should not be nullptr
[Hint: iter->second should not be null.] at (/paddle/paddle/fluid/imperative/partial_grad_engine.cc:501)
I found that if I remove the Python control flow (the for loops), the program runs and produces the correct dx, but a.gradient() still returns None. So my questions are: when using the grad API, is Python control flow not allowed in the network-building code? And why does the manually unrolled network (appending each layer call by hand) still return None from a.gradient()?
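For reference, the first-order gradient this repro should produce can be checked with numpy alone. This is a sketch under stated assumptions, not the Paddle API: random matrices W and b stand in for the ten Linear(5, 3) weights and biases, and dygraph.grad's default grad_outputs of all-ones is assumed.

```python
import numpy as np

# Shapes follow the issue's example: a is (10, 2, 5), each layer maps 5 -> 3.
rng = np.random.default_rng(0)
a = np.arange(100, dtype=np.float64).reshape(10, 2, 5)
W = rng.standard_normal((10, 5, 3))   # stand-ins for the ten Linear(5, 3) weights
b = rng.standard_normal((10, 3))      # stand-ins for their biases

# Forward pass: out = sum_i a[i] @ W[i] + b[i], shape (2, 3).
out = sum(a[i] @ W[i] + b[i] for i in range(10))

# With grad_outputs = ones(2, 3), the gradient of out w.r.t. each slice is
# dx[i] = ones(2, 3) @ W[i].T, shape (2, 5); stacked, dx has a's shape (10, 2, 5).
dx = np.stack([np.ones((2, 3)) @ W[i].T for i in range(10)])

# Finite-difference check on one element (exact here, because out is affine in a).
eps = 1e-2
ap = a.copy()
ap[3, 1, 2] += eps
out_p = sum(ap[i] @ W[i] + b[i] for i in range(10))
fd = (out_p.sum() - out.sum()) / eps
assert abs(fd - dx[3, 1, 2]) < 1e-6
```

Note that dx is a function of the weights only: because the network is affine, the gradient is the same at every value of a.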
16 answers
koaltpgm1#
We have traced this to a bug in gradient aggregation; we are looking into how to fix it.
2nbm6dog2#
OK, thank you.
owfi6suc3#
This issue will be fixed in PR #25781.
csbfibhn4#
Sorry to interrupt, but will CPU parallelism be added for dynamic graphs? From the 2.0-alpha release notes, it looks like dynamic graph will be the primary mode from 2.0 onward.
nbysray55#
Do you mean training with multiple threads and synchronizing the parameters?
jljoyd4f6#
Yes.
xqk2d5yq7#
Hi, I see the bug is reported as fixed. What do I need to do on my side to get it, just update?
hs1rzwqc8#
Once you have the bug fixed, what do I do to use it: reinstall 1.8.1, or install the latest develop build?
nhjlsmyf9#
Hi, I am the one who asked about second-order differentiation in issue #25703, which you merged into this one. I see the bug is reported as fixed, but how do we actually get the fix? Do we wait for a release, or is there an interim build we can use? Please at least give a simple answer instead of leaving the question unanswered; I asked under the issue for two days with no reply, so I had to email you. Apologies for the disturbance. ylzustc@mail.ustc.edu.cn
yizd12fk10#
We expect to release version 1.8.4 this week to fix this issue.
zzzyeukh11#
Thanks. ylzustc@mail.ustc.edu.cn
j5fpnvbx12#
Hi, I installed version 1.8.4, and now the error says outright that this op does not support second-order derivatives? The test program is the same as the original one; the new error message:
Traceback (most recent call last):
File "36.py", line 18, in <module>
only_inputs=True,allow_unused=False)[0]
File "</share/group-soft/anaconda/install/lib/python3.7/site-packages/decorator.py:decorator-gen-29>", line 2, in grad
File "/share/group-soft/anaconda/install/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/share/group-soft/anaconda/install/lib/python3.7/site-packages/paddle/fluid/framework.py", line 216, in __impl__
return func(*args, **kwargs)
File "/share/group-soft/anaconda/install/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py", line 563, in grad
create_graph, retain_graph, allow_unused, only_inputs)
paddle.fluid.core_avx.EnforceNotMet:
C++ Call Stacks (More useful to developers):
0 std::string paddle::platform::GetTraceBackString<std::string const&>(std::string const&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::string const&, char const*, int)
2 paddle::imperative::PartialGradTask::RunEachOp(paddle::imperative::OpBase*)
3 paddle::imperative::PartialGradTask::Run()
4 paddle::imperative::PartialGradEngine::Execute()
Error Message Summary:
NotFoundError: The Op matmul_grad doesn't have any grad op. If you don't intend calculating higher order derivatives, please set create_graph to False. [Hint: double_grad_node should not be null.] at (/paddle/paddle/fluid/imperative/partial_grad_engine.cc:894)
oymdgrw713#
create_graph = False, retain_graph = False
vx6bjr1n14#
But doesn't that mean second-order derivatives can no longer be computed?
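Right, create_graph=False disables higher-order gradients. One thing worth noting about the original repro, though: every layer in it is affine, so dx = d(out.sum())/da is constant in a and the true second derivative is identically zero. A minimal numpy sketch, with one hypothetical weight matrix standing in for a Linear(5, 3) layer:

```python
import numpy as np

# Hypothetical stand-in for a single Linear(5, 3) weight matrix.
W = np.arange(15, dtype=np.float64).reshape(5, 3)

def dx(a):
    # Gradient of (a @ W).sum() with respect to a; note it does not use a at all.
    return np.ones((2, 3)) @ W.T

a0 = np.zeros((2, 5))
a1 = 100.0 * np.ones((2, 5))
# The gradient is identical at every point, so the second derivative is zero.
assert np.allclose(dx(a0), dx(a1))
```

So even with the engine bug fixed, dx.backward() on this particular network can only ever yield a zero gradient for a; a repro with a nonlinearity between the layers (for example squaring or tanh) would be a sharper test of second-order support.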
iqjalb3h15#
Paddle 2.0 still has this problem; does using 1.8.4 fix it??