How do I preserve metric values across training sessions in Keras?

vom3gejh · asked 12 months ago

I have a fit() function that uses the ModelCheckpoint() callback to save the model whenever it beats every previous one, with save_weights_only=False so that the whole model is saved. That should let me resume training later with load_model().
Unfortunately, somewhere in the save()/load_model() round trip the metric values are not preserved -- for example, val_loss gets reset to inf. As a result, when training resumes, ModelCheckpoint() will always save the model after the first epoch, which is almost always worse than the previous champion from the earlier session.
I have established that I can set ModelCheckpoint()'s current best value before resuming training, like this:

myCheckpoint = ModelCheckpoint(...)
myCheckpoint.best = bestValueSoFar

Obviously I could monitor the values I need, write them out to a file, and read them back in when resuming, but given that I'm new to Keras I wondered whether I was missing something obvious.
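That write-to-a-file-and-read-it-back approach can be very small. The sketch below is one way to do it under assumptions: the file name and the helper functions are mine, not Keras API; only the `myCheckpoint.best` attribute comes from the question above.

```python
import json
import os

BEST_FILE = 'best_val_loss.json'   # hypothetical state file

def save_best(value, path=BEST_FILE):
    # Persist the best monitored value so a later session can restore it.
    with open(path, 'w') as f:
        json.dump({'val_loss': float(value)}, f)

def load_best(path=BEST_FILE, default=float('inf')):
    # Return the previously saved best value, or +inf on a fresh run
    # (which matches ModelCheckpoint's own starting point in 'min' mode).
    if os.path.isfile(path):
        with open(path) as f:
            return json.load(f)['val_loss']
    return default

# Before resuming training:
#   myCheckpoint = ModelCheckpoint(...)
#   myCheckpoint.best = load_best()
```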


9gm1akwq1#

In the end, I quickly wrote my own callback to track the best training values so that I can reload them. It looks like this:

# State monitor callback. Tracks how well we are doing and writes
# some state to a json file. This lets us resume training seamlessly.
#
# ModelState.state is:
#
# { "epoch_count": nnnn,
#   "best_values": { dictionary with keys for each log value },
#   "best_epoch": { dictionary with keys for each log value }
# }

import json
import os

from keras import callbacks

class ModelState(callbacks.Callback):

    def __init__(self, state_path):

        self.state_path = state_path

        if os.path.isfile(state_path):
            print('Loading existing .json state')
            with open(state_path, 'r') as f:
                self.state = json.load(f)
        else:
            self.state = { 'epoch_count': 0,
                           'best_values': {},
                           'best_epoch': {}
                         }

    def on_train_begin(self, logs=None):

        print('Training commences...')

    def on_epoch_end(self, epoch, logs=None):

        # Currently, for everything we track, lower is better

        for k in (logs or {}):
            if k not in self.state['best_values'] or logs[k] < self.state['best_values'][k]:
                self.state['best_values'][k] = float(logs[k])
                self.state['best_epoch'][k] = self.state['epoch_count']

        with open(self.state_path, 'w') as f:
            json.dump(self.state, f, indent=4)
        print('Completed epoch', self.state['epoch_count'])

        self.state['epoch_count'] += 1
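The best-value bookkeeping in on_epoch_end() can be exercised on its own, without Keras. This standalone sketch (the function name is mine) mirrors that loop:

```python
def update_best(state, logs):
    # Mirror of ModelState.on_epoch_end: for every logged metric, record
    # the lowest value seen so far and the epoch it occurred at, then
    # advance the epoch counter.
    for k, v in logs.items():
        if k not in state['best_values'] or v < state['best_values'][k]:
            state['best_values'][k] = float(v)
            state['best_epoch'][k] = state['epoch_count']
    state['epoch_count'] += 1
    return state

state = {'epoch_count': 0, 'best_values': {}, 'best_epoch': {}}
for val_loss in (0.5, 0.7, 0.3):
    update_best(state, {'val_loss': val_loss})
# best val_loss is 0.3, recorded at epoch 2
```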

Then, in the fit() function, something like this:

# Set up the model state, reading in prior results info if available

model_state = ModelState(path_to_state_file)

# Checkpoint the model if we get a better result

model_checkpoint = callbacks.ModelCheckpoint(path_to_model_file,
                                             monitor='val_loss',
                                             save_best_only=True,
                                             verbose=1,
                                             mode='min',
                                             save_weights_only=False)

# If we have trained previously, set up the model checkpoint so it won't save
# until it finds something better. Otherwise, it would always save the results
# of the first epoch.

if 'val_loss' in model_state.state['best_values']:
    model_checkpoint.best = model_state.state['best_values']['val_loss']

callback_list = [model_checkpoint,
                 model_state]

# Offset the epoch count if we are resuming training. If you don't do
# this, only (epochs - initial_epoch) epochs will be run.

initial_epoch = model_state.state['epoch_count']
epochs += initial_epoch

# .fit() or .fit_generator, etc. goes here.


2guxujil2#

I don't think you have to store the metric values yourself. There was a feature request in the Keras project for something very similar, but it was closed; maybe you can try the solutions that were proposed there. The Keras philosophy is that storing metrics is not very useful, because saving the model means saving only the architecture and weights of each layer, not the history or anything else.
The simplest approach is to create a kind of metafile that contains the model's metric values and the name of the model file itself. You can then load the metafile, get the best metric values and the name of the model that produced them, load that model again, and continue training.
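A minimal sketch of that metafile idea follows; the file name, the helper function, and the JSON layout are all assumptions of mine, not anything Keras provides:

```python
import json
import os

META_PATH = 'model_meta.json'   # hypothetical metafile

def record_result(model_file, val_loss, path=META_PATH):
    # Update the metafile only if this model beats the best recorded
    # val_loss; return whatever the metafile now considers best.
    if os.path.isfile(path):
        with open(path) as f:
            old = json.load(f)
        if old['val_loss'] <= val_loss:
            return old
    meta = {'model_file': model_file, 'val_loss': val_loss}
    with open(path, 'w') as f:
        json.dump(meta, f, indent=4)
    return meta

# On resume:
#   with open(META_PATH) as f:
#       meta = json.load(f)
#   model = load_model(meta['model_file'])   # keras.models.load_model
#   checkpoint.best = meta['val_loss']
```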
