Applying a cyclical learning rate in Keras

2hh7jdfx · posted 2023-03-23

I want to test a cyclical learning rate in Keras, but right at the end of the first epoch I get the TypeError below. How do I define a cyclical learning rate correctly in this situation?
Epoch 1/10000
510/510 [==============================] - 0s - loss: 0.0111 - mae: 0.0772
Traceback (most recent call last):
  File "H:\My Data\From Drive C my PC Spet_2022\Code\Phase_2_Cyclic_Learning.py", line 303, in <module>
    hist = model.fit_generator(generator=training_generator, validation_data=calibration_generator, epochs=n_epoch,
  File "C:\Users\bluesky\AppData\Roaming\Python\Python38\site-packages\keras\engine\training.py", line 2507, in fit_generator
    return self.fit(
  File "C:\Users\bluesky\AppData\Roaming\Python\Python38\site-packages\keras\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:\Users\bluesky\AppData\Roaming\Python\Python38\site-packages\keras\utils\generic_utils.py", line 965, in update
    self._values[k] = [v * value_base, value_base]
TypeError: unsupported operand type(s) for *: 'CyclicalLearningRate' and 'int'
My code:

# pip install tensorflow-addons
import pandas as pd
from tensorflow.keras import callbacks, initializers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import CSVLogger
from tensorflow_addons.optimizers import CyclicalLearningRate

model = Sequential()

input_dim = len(input_cols_idx)
batch_size = batch_size_gen

model.add(Dense(units=hidden_layer1_neurons, input_dim= input_dim,
        kernel_initializer = initializers.RandomNormal(mean=0.0, stddev=0.05)))

model.add(Activation(activation_1)) 
model.add(Dense(units=1))
model.add(Activation(activation_2)) 


# Create Cyclic Learning Rate and Callbacks

cyclical_learning_rate = CyclicalLearningRate(
    initial_learning_rate=0.000008,
    maximal_learning_rate=0.001,
    step_size=2360,
    scale_fn=lambda x: 1 / (2.0 ** (x - 1)),
    scale_mode='cycle')
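For reference, this configuration corresponds to the "triangular2" policy: the learning rate ramps linearly up and back down over 2 × step_size batches, and scale_fn halves the amplitude each cycle. A minimal sketch of the cyclical-LR formula as tfa documents it (using the parameter values from the snippet above; this is an illustration, not tfa's actual code):

```python
import math

def clr(step, initial_lr=0.000008, max_lr=0.001, step_size=2360,
        scale_fn=lambda c: 1 / (2.0 ** (c - 1))):
    # Triangular cyclical learning rate with scale_mode='cycle':
    # the amplitude is rescaled once per cycle by scale_fn(cycle).
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)
    return initial_lr + (max_lr - initial_lr) * max(0.0, 1 - x) * scale_fn(cycle)

print(clr(0))         # start of cycle 1: initial_learning_rate
print(clr(2360))      # peak of cycle 1: maximal_learning_rate
print(clr(3 * 2360))  # peak of cycle 2: amplitude halved by scale_fn
```

With step_size=2360 the rate peaks every 4720 batches, so with 510 batches per epoch (as in the log above) one full cycle spans roughly nine epochs.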

model.compile(loss='mean_squared_error', 
 optimizer=Adam(learning_rate=cyclical_learning_rate), metrics=['mae'])

checkpointer = callbacks.ModelCheckpoint(filepath=("%s/%s.h5" % (model_dir,model_name)), 
monitor='val_loss', mode='min', save_best_only=True) 

# val_loss                                  
reduce = callbacks.ReduceLROnPlateau(monitor='val_mae', factor=0.25, mode='min',
                             patience=lr_reduce_patience, min_delta=epsilon,
                             cooldown=0, min_lr=min_lr)  # recent Keras versions use min_delta in place of epsilon
earlystop = callbacks.EarlyStopping(monitor='val_mae', min_delta = earlystop_min_delta, 
                            patience=earlystop_patience, verbose=0, mode='min')

csv_logger = CSVLogger('%s/%s_log.csv'%(model_dir, model_name), append=True, 
  separator=';')

---------- Training

# fit_generator is deprecated in recent Keras; model.fit accepts generators directly
hist = model.fit_generator(generator=training_generator,
                validation_data=calibration_generator, epochs=n_epoch,
                use_multiprocessing=False,
                callbacks=[reduce, earlystop, checkpointer, csv_logger])
                # workers=6)  # TensorBoard(log_dir='./logs')
cost=pd.DataFrame(hist.history)

qojgxg4l1#

I think I found the solution (after struggling with this myself):
It worked for me once I removed the ReduceLROnPlateau callback (tf==2.11.0 and tfa==0.19.0).
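My reading of the traceback (an interpretation, not confirmed against the Keras source here): ReduceLROnPlateau treats the optimizer's learning rate as a plain number, so with a schedule object attached, the value that ends up in the epoch logs under 'lr' is the CyclicalLearningRate object itself. The progress bar then tries to average it numerically, which is exactly the failing line `self._values[k] = [v * value_base, value_base]`. A standalone sketch of that mechanism, with a stand-in class instead of the real schedule:

```python
class FakeSchedule:
    """Stands in for a schedule object like CyclicalLearningRate."""

def progbar_average(v, value_base=1):
    # Mirrors the failing line in keras/utils/generic_utils.py:
    #   self._values[k] = [v * value_base, value_base]
    return [v * value_base, value_base]

progbar_average(0.001)            # a plain float averages fine
try:
    progbar_average(FakeSchedule())  # a schedule object cannot be multiplied
except TypeError as e:
    print("TypeError:", e)
```

Conceptually the two mechanisms also conflict: the schedule already varies the learning rate every step, so there is no single scalar for ReduceLROnPlateau to reduce. Dropping the callback (or using a fixed float learning rate if you want plateau-based reduction instead) resolves the clash.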
