Keras model predictions give opposite results

mlnl4t2r · posted 2022-11-24 in Other

A +50 reputation bounty on this question expires in 5 days. user1260391 is looking for a canonical answer.

I trained a model called model_2 in Keras and made predictions with model.predict, but I noticed that when I re-run the code the results are completely different; on the next run, for example, the probability values are all close to 0. Is this related to memory, or to the stateful argument I have seen mentioned in other posts?

# Imports used by the snippets below
import numpy as np
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv1D, Dense

X = df.iloc[:, 1:10161]                      # columns 1..10160: 10080 time steps + 80 extra features
X = X.to_numpy()
X = X.reshape([X.shape[0], X.shape[1], 1])
X_train_1 = X[:, 0:10080, :]                        # time-series branch input
X_train_2 = X[:, 10080:10160, :].reshape(17, 80)    # 80 extra features for the 17 samples

inputs_1 = keras.Input(shape=(10080, 1))
layer1 = Conv1D(64, 14)(inputs_1)
layer2 = layers.MaxPool1D(5)(layer1)
layer3 = Conv1D(64, 14)(layer2)       
layer4 = layers.GlobalMaxPooling1D()(layer3)
layer5 = layers.Dropout(0.2)(layer4)

inputs_2 = keras.Input(shape=(80,))
layer6 = layers.concatenate([layer5, inputs_2])
layer7 = Dense(128, activation='relu')(layer6)
layer8 = layers.Dropout(0.5)(layer7)
layer9 = Dense(2, activation='softmax')(layer8)

model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], outputs = [layer9])

adam = keras.optimizers.Adam(lr = 0.0001)
model_2.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['acc'])

prediction = pd.DataFrame(model_2.predict([X_train_1,X_train_2]),index = df.iloc[:,0])  
pred = np.argmax(model_2.predict([X_train_1,X_train_2]), axis=1) 
display(prediction, pred)

An example of the contradictory results:
Trial 1:

               0         1
id
11  1.131853e-07  1.000000
22  1.003963e-06  0.999999
33  1.226156e-07  1.000000
44  9.985497e-08  1.000000
55  1.234705e-07  1.000000
66  1.189311e-07  1.000000
77  6.631822e-08  1.000000
88  9.586067e-08  1.000000
99  9.494666e-08  1.000000

Trial 2:

           0         1
id
11  0.183640  0.816360
22  0.487814  0.512187
33  0.151600  0.848400
44  0.135977  0.864023
55  0.120982  0.879018
66  0.171371  0.828629
77  0.199774  0.800226
88  0.133711  0.866289
99  0.125785  0.874215

Trial 3:

           0         1
id
11  0.900128  0.099872
22  0.573520  0.426480
33  0.948409  0.051591
44  0.955184  0.044816
55  0.959075  0.040925
66  0.945758  0.054242
77  0.956582  0.043418
88  0.954180  0.045820
99  0.964601  0.035399

Trial 4:

      0             1
id
11  1.0  4.697790e-08
22  1.0  2.018885e-07
33  1.0  2.911827e-08
44  1.0  2.904826e-08
55  1.0  1.368165e-08
66  1.0  2.742492e-08
77  1.0  1.461449e-08
88  1.0  2.302636e-08
99  1.0  2.099636e-08

The model is trained with the following:

n_folds = 10
skf = StratifiedKFold(n_splits=n_folds, shuffle=True)
skf = skf.split(X_train_1, Y_cat)

cv_score = []

for i, (train, test) in enumerate(skf):

    model_2 = my_model()

    history = model_2.fit([X_train_1[train], X_train_2[train]], Y[train], validation_data=([X_train_1[test], X_train_2[test]], Y[test]), epochs=120, batch_size=10) 
    
    result = model_2.evaluate([X_train_1[test], X_train_2[test]], Y[test])
    cv_score.append(result)          # keep each fold's loss and accuracy
    keras.backend.clear_session()
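For reference, one way to make sure a later predict call reuses weights that were actually trained, rather than a fresh random initialization, is to save and reload them explicitly. A minimal sketch, assuming the same my_model() helper and fold data as above; the file names are only illustrative:

for i, (train, test) in enumerate(skf):
    model_2 = my_model()
    model_2.fit([X_train_1[train], X_train_2[train]], Y[train], epochs=120, batch_size=10)
    # Save this fold's trained weights before the session is cleared.
    model_2.save_weights(f'fold_{i}.h5')
    keras.backend.clear_session()

# Rebuild the architecture and load trained weights before predicting,
# so predict() does not run on random initial weights.
model_2 = my_model()
model_2.load_weights('fold_0.h5')
prediction = model_2.predict([X_train_1, X_train_2])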

htrmnn0y · 1#

This is completely normal. When you create a new model, its weights are initialized randomly, so the predictions change every time you run this code.
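To see the effect in isolation: two freshly built copies of the same architecture start from different random weights, so they map the same input to different outputs, and fixing the global seeds before building makes the initialization repeatable. A minimal, self-contained sketch with toy shapes (not the data from the question), assuming TensorFlow 2.7+ for keras.utils.set_random_seed:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build():
    # Tiny stand-in architecture, rebuilt from scratch on every call.
    inp = keras.Input(shape=(8,))
    out = layers.Dense(2, activation='softmax')(inp)
    return keras.Model(inp, out)

x = np.ones((3, 8), dtype='float32')

# Untrained models: each build() draws new random weights, so the
# predictions on identical input differ between the two calls.
print(build().predict(x))
print(build().predict(x))

# Fixing the seeds before building makes the initial weights, and hence
# the untrained predictions, repeatable across runs.
keras.utils.set_random_seed(42)
print(build().predict(x))

This appears to be what happens in the posted snippet: the top-level model_2 is built and compiled but predict is called without any fit in between, so its output reflects whatever random initialization it happened to get on that run.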
