forward_program = fluid.Program()
loss = None
with fluid.program_guard(main_program=forward_program):
    # shapes/dtypes assumed for illustration
    input = fluid.layers.data(name="input", shape=[784], dtype="float32")
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")
    out = fluid.layers.fc(input, size=10, act="softmax")
    loss = fluid.layers.cross_entropy(out, label)

exe = fluid.Executor(fluid.CPUPlace())
lr = 0.1  # initial learning rate (assumed)
for epoch in range(10):
    if epoch > 5:
        lr = 0.01
    train_program = forward_program.clone()
    with fluid.program_guard(main_program=train_program):
        # Look up the cloned loss variable by name in the new program.
        loss = train_program.global_block().var(loss.name)
        # Adam lives in fluid.optimizer, not fluid.layers.
        optimizer = fluid.optimizer.Adam(learning_rate=lr)
        optimizer.minimize(loss)
    exe.run(train_program)
5 Answers
nzkunb0c1#
Here train_prog is the program defined above; the lr fetched and printed does not change.
swvgeqrz2#
How are you fetching the lr?
ua4mk5z43#
Try updating the data of the lr variable in the scope as follows?
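The snippet this answer refers to is not included in the thread. Conceptually, the learning rate lives in a persistable variable in the executor's scope, and the program re-reads it on every run, so overwriting that variable's data changes the effective lr without rebuilding any ops. A minimal stand-in sketch of that idea, using a plain dict as the "scope" (variable names here are illustrative, not Paddle's real API):

```python
import numpy as np

# Toy stand-in for the executor's scope: variable name -> tensor data.
# In Paddle, the real object would be fetched from the global scope and
# its tensor updated in place; the name "learning_rate_0" is assumed.
scope = {"learning_rate_0": np.array([0.1], dtype="float32")}
lr_name = "learning_rate_0"

def effective_lr(scope, lr_name):
    # Each run of the program re-reads the lr variable from the scope,
    # so its current data is the learning rate actually used.
    return float(scope[lr_name][0])

def set_lr(scope, lr_name, new_lr):
    # Overwrite the variable's data in place instead of rebuilding ops.
    scope[lr_name][...] = np.float32(new_lr)

set_lr(scope, lr_name, 0.01)  # from now on, every run uses 0.01
```

The key point is that the update mutates existing data that the program reads, rather than creating a new optimizer (which, as in the code above, only appends a fresh set of ops with a baked-in constant).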
s1ag04yj4#
For how to fetch it, see train.py in the latest image_classification example:
global_lr = optimizer._global_learning_rate()
global_lr.persistable = True
lr_name = optimizer._global_learning_rate().name
build_program_out.append(global_lr)

This is where I obtain the global lr_name variable.
After modifying the code this way, the fetched lr still does not change.
@wanghaoshuang
cygmwpex5#
If you only want to change the lr, the method I described in this issue is rather hacky.
A better approach is to implement an lr scheduler; see this reference:
https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/layers/learning_rate_scheduler.py
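The core idea behind those schedulers is to compute the learning rate from the global step inside the program itself, rather than baking a Python float into the optimizer at construction time. A pure-Python sketch of piecewise decay, the simplest of those policies (boundaries and values below are assumed; the real implementation expresses this logic as ops in the program):

```python
def piecewise_decay(step, boundaries, values):
    """Return the learning rate for a given global step.

    boundaries: step thresholds, e.g. [1000, 2000]
    values: one lr per interval, len(boundaries) + 1 entries,
            e.g. [0.1, 0.01, 0.001]
    """
    for i, boundary in enumerate(boundaries):
        if step < boundary:
            return values[i]
    # Past the last boundary, stay at the final value.
    return values[-1]
```

Because the schedule is a function of the step, the lr changes automatically as training progresses, with no need to clone programs or rebuild optimizers per epoch.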