Paddle: changing optimizer args during training

dly7yett · posted on 2021-11-29
import paddle.fluid as fluid

# Build the forward-only program once.
forward_program = fluid.Program()
startup_program = fluid.Program()
loss = None
with fluid.program_guard(main_program=forward_program, startup_program=startup_program):
    input = fluid.layers.data(name="input", shape=[784], dtype="float32")
    label = fluid.layers.data(name="label", shape=[1], dtype="int64")
    out = fluid.layers.fc(input, size=10, act="softmax")
    loss = fluid.layers.mean(fluid.layers.cross_entropy(out, label))

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_program)

lr = 0.1
for epoch in range(10):
    if epoch > 5:
        lr = 0.01
    # Re-clone the forward program and append an optimizer with the new lr.
    train_program = forward_program.clone()
    with fluid.program_guard(main_program=train_program):
        loss = train_program.global_block().var(loss.name)
        optimizer = fluid.optimizer.Adam(learning_rate=lr)
        optimizer.minimize(loss)

    exe.run(train_program)  # feed/fetch omitted, as in the original question

nzkunb0c1#

I added this code after each training epoch:
    # Inside the training loop: at passes 2, 4 and 6, decay the lr by
    # rebuilding the optimizer on a fresh clone of the program.
    if pass_id in [2, 4, 6]:
        rate = math.pow(0.1, pass_id / 2)
        print('rate:', rate)
        train_program = train_prog.clone()
        with fluid.program_guard(main_program=train_program):
            loss = train_program.global_block().var(train_cost.name)
            optimizer = fluid.optimizer.Adam(learning_rate=params["lr"] * rate)
            optimizer.minimize(loss)
        train_prog = train_program

Here train_prog is the program defined earlier, but the lr I fetch and print does not change.


swvgeqrz2#

How are you fetching the lr?
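For reference, a minimal sketch of one way to fetch it, assuming the exe, train_program, feeder, train_reader, loss, and lr_name names used in the other snippets in this thread (the lr variable must be persistable to be fetchable by name):

for data in train_reader():
    loss_val, lr_val = exe.run(train_program,
                               feed=feeder.feed(data),
                               fetch_list=[loss.name, lr_name])
    print("loss:", loss_val, "lr:", lr_val)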


ua4mk5z43#

Try updating the data backing the lr variable in the scope, like this:

import numpy as np
lr_tensor = fluid.global_scope().find_var(lr_name).get_tensor()
# set() expects a numpy ndarray matching the variable's dtype
lr_tensor.set(np.array([params["lr"] * rate], dtype="float32"), place)
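In the training loop this could look roughly as follows (a sketch only; num_passes, train_reader, feeder, and the other names are taken from or modeled on the snippets above, and place must be the place the executor was created with):

import math
import numpy as np

for pass_id in range(num_passes):
    if pass_id in [2, 4, 6]:
        # Overwrite the data of the lr variable directly in the global scope.
        rate = math.pow(0.1, pass_id / 2)
        lr_tensor = fluid.global_scope().find_var(lr_name).get_tensor()
        lr_tensor.set(np.array([params["lr"] * rate], dtype="float32"), place)
    for data in train_reader():
        exe.run(train_program, feed=feeder.feed(data))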

s1ag04yj4#

For how I fetch it, see train.py in the latest image_classification model:
global_lr = optimizer._global_learning_rate()
global_lr.persistable = True
lr_name = optimizer._global_learning_rate().name
build_program_out.append(global_lr)

This is where I get the lr_name global variable. After modifying the code as suggested, the fetched lr still does not change.
@wanghaoshuang


cygmwpex5#

If you only need to change the lr, the method I described in the issue is rather hacky.

A better approach is to implement an lr scheduler; see this file for reference:
https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/layers/learning_rate_scheduler.py
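For example, a minimal sketch with fluid.layers.piecewise_decay from that file, which makes the decay part of the program itself instead of patching the scope from outside (the boundaries/values and the train_program, startup_program, and loss names are illustrative):

import paddle.fluid as fluid

with fluid.program_guard(train_program, startup_program):
    # Decay the lr at the given global steps: 0.1 -> 0.01 -> 0.001 -> 0.0001.
    lr = fluid.layers.piecewise_decay(boundaries=[2000, 4000, 6000],
                                      values=[0.1, 0.01, 0.001, 0.0001])
    optimizer = fluid.optimizer.Adam(learning_rate=lr)
    optimizer.minimize(loss)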
