Keras transfer learning on MNIST digits with VGG16

z4bn682m · asked 12 months ago

I am trying to do transfer learning on the MNIST digits. I'm interested in getting the logits and using them for a gradient-based attack. But for some reason the kernel keeps dying, even though my machine is a GPU-enabled Apple M2 Max. I also tried the same thing on Colab with a GPU. The model barely learns the dataset, and I'm reusing the ImageNet weights. How can I fix this?

import tensorflow as tf


class VGG16TransferLearning(tf.keras.Model):
  def __init__(self, base_model, models):
    super(VGG16TransferLearning, self).__init__()
    # frozen convolutional base (VGG16 pretrained on ImageNet)
    self.base_model = base_model

    # new classification head for the 10 digit classes
    self.flatten = tf.keras.layers.Flatten()
    self.dense1 = tf.keras.layers.Dense(512, activation='relu')
    self.dense2 = tf.keras.layers.Dense(512, activation='relu')
    self.dense3 = tf.keras.layers.Dense(10)  # raw logits, no softmax
    self.layers_list = [self.flatten, self.dense1, self.dense2, self.dense3]

    # stack the base model and the new head into one Sequential model
    self.model = models.Sequential(
        [self.base_model, *self.layers_list]
    )

  def call(self, inputs, training=False):
    activation_list = []
    out = inputs

    # pass the input through every layer and keep the intermediate activations
    for layer in self.model.layers:
      out = layer(out)
      activation_list.append(out)

    if training:
      return out
    else:
      prob = tf.nn.softmax(out)
      return out, prob  # logits and probabilities at inference time
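For context, the reason call returns the raw logits is that I want to feed them into something like FGSM later. This is only a rough sketch of how I intend to use them; the names images, labels and the epsilon value are placeholders, not code I have actually run:

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm_perturbation(model, images, labels, epsilon=0.01):
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        # call() with training=False returns (logits, probabilities)
        logits, _ = model(images, training=False)
        loss = loss_fn(labels, logits)
    grad = tape.gradient(loss, images)
    # step in the direction that increases the loss, then clip back to the valid pixel range
    adversarial = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adversarial, 0.0, 1.0)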

Here is how the class above is instantiated:

from tensorflow.keras.applications import VGG16

base_model = VGG16(weights="imagenet", include_top=False, input_shape=x_train[0].shape)
base_model.trainable = False  # freeze the ImageNet weights
My input shape is (75, 75, 3).
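For reference, this is roughly how I get the 28x28 grayscale MNIST images into that shape. It is only a sketch of my preprocessing; the 75x75 resize target and the [0, 1] scaling are just what I happen to use:

from tensorflow.keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

def prepare(images):
    images = tf.cast(tf.expand_dims(images, -1), tf.float32)  # (N, 28, 28) -> (N, 28, 28, 1)
    images = tf.image.grayscale_to_rgb(images)                 # 1 channel -> 3 channels for VGG16
    return tf.image.resize(images, (75, 75)) / 255.0           # resize and scale to [0, 1]

x_train, x_test = prepare(x_train), prepare(x_test)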
Below are the compile and fit calls:

from tensorflow.keras import layers, models

model = VGG16TransferLearning(base_model, models)

# the final Dense layer outputs raw logits, so the loss needs from_logits=True
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.legacy.Adam(),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))


This is the error I get every time I call the fit method:

Kernel Restarting
The kernel for Untitled.ipynb appears to have died. It will restart automatically

twh00eeo 1#

The error came from my machine's configuration. My guess is that TensorFlow wasn't seeing my Mac's GPU, even though listing the physical devices evaluated to 1. It's resolved now and everything works.
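For anyone who runs into the same thing, a quick sanity check that TensorFlow actually sees the Apple GPU (this assumes the usual tensorflow-macos plus tensorflow-metal setup on Apple silicon):

import tensorflow as tf

# should list one GPU device on an M-series Mac once tensorflow-metal is installed
print(tf.config.list_physical_devices('GPU'))
print(tf.__version__)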
