Keras/TensorFlow training - dataset empty when using repeat()?

mcdcgff0  posted on 2023-01-21  in Other

I am using TensorFlow 1.1.4 and trying to train with a validation/test split.
I build the dataset with tf.data.experimental.make_csv_dataset.
My initial CSV dataset:

def get_dataset(file_path, BATCH_SIZE, NUM_EPOCHS, COLUMN_NAMES, **kwargs):
    dataset = tf.data.experimental.make_csv_dataset(
        file_path,
        batch_size=BATCH_SIZE,
        na_value="?",
        num_epochs=NUM_EPOCHS,
        column_names=COLUMN_NAMES,
        ignore_errors=True,
        shuffle=True,
        **kwargs)
    return dataset

I then map a few functions over the dataset to format the data the way the model expects:

csv_dataset = get_dataset(label_file, BATCH_SIZE, NUM_EPOCHS, COLUMN_NAMES)

# make a new dataset from our csv by mapping every value to the above function
split_dataset = csv_dataset.map(split_csv_to_path_and_labels)

# make a new dataset that loads our images from the first path
image_and_labels_ds = split_dataset.map(load_and_preprocess_image_batch, num_parallel_calls=AUTOTUNE)

# update our image floating point range to match -1, 1
ds = image_and_labels_ds.map(change_range)
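
(The three mapping functions are not shown in the question. Purely for context, here is a hypothetical sketch of what they might look like, assuming a "path" column plus numeric label columns of the same dtype and fixed-size images; none of these names or shapes are confirmed by the original post:)

import tensorflow as tf

AUTOTUNE = tf.data.experimental.AUTOTUNE

# Hypothetical: pull the image-path column out of each CSV batch and stack the
# remaining (assumed numeric, same-dtype) columns into a label tensor.
def split_csv_to_path_and_labels(features):
    paths = features.pop('path')  # assumed column name
    labels = tf.stack(list(features.values()), axis=-1)
    return paths, labels

# Hypothetical: load and decode each image in the batch and scale to [0, 1].
# Assumes every image already has the same dimensions.
def load_and_preprocess_image_batch(paths, labels):
    def load_one(p):
        img = tf.io.read_file(p)
        img = tf.image.decode_jpeg(img, channels=3)
        return tf.cast(img, tf.float32) / 255.0
    images = tf.map_fn(load_one, paths, dtype=tf.float32)
    return images, labels

# Rescale [0, 1] images to [-1, 1], as the question's comment describes.
def change_range(images, labels):
    return 2 * images - 1, labels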

My first attempt at splitting test/train/val looked like this:

BATCH_SIZE = 64
NUM_EPOCHS = 10
DATASET_SIZE = len(open(label_file).readlines()) - 1  # remove header

train_size = int(0.7 * DATASET_SIZE)
val_size = int(0.15 * DATASET_SIZE)
test_size = int(0.15 * DATASET_SIZE)

train_dataset = ds.take(train_size)
test_dataset = ds.skip(train_size)
val_dataset = test_dataset.skip(test_size)
test_dataset = test_dataset.take(test_size)

steps_per_epoch = int(train_size // BATCH_SIZE)
val_steps_per_epoch = int(val_size // BATCH_SIZE)

history = model.fit(train_dataset, epochs=NUM_EPOCHS, steps_per_epoch=steps_per_epoch,
                    validation_data=val_dataset, validation_steps=val_steps_per_epoch,
                    validation_freq=NUM_EPOCHS)

On the last step of my final training epoch, I get this error:

70/71 [============================>.] - ETA: 0s - loss: 1.0760 - acc: 0.8250
WARNING:tensorflow:Your dataset iterator ran out of data; interrupting training. Make sure that your iterator can generate at least `validation_steps * epochs` batches (in this case, 1 batches). You may need to use the repeat() function when building your dataset.

Digging into this, it appears one of my labels has fewer samples than the batch count. However, the error above says I can use ds.repeat(), and searching for similar issues suggests I could in theory try tf.contrib.data.batch_and_drop_remainder().
However, I can't get either of these to work.
If I add

train_dataset = train_dataset.repeat(1)
test_dataset = test_dataset.repeat(1)
val_dataset = val_dataset.repeat(1)

I still get the empty dataset error and the same warning as above.
If I use

train_dataset = train_dataset.repeat()
test_dataset = test_dataset.repeat()
val_dataset = val_dataset.repeat()

I get a warning about using an infinitely repeating dataset, and it still fails.
If I add

train_dataset = train_dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
test_dataset = test_dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
val_dataset = val_dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))

I also get the empty dataset warning.
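
(As an aside: tf.contrib was removed in later TensorFlow releases, and the same effect is available through the drop_remainder argument of Dataset.batch. A minimal sketch, assuming the elements are first unbatched back to individual samples; on older 1.x releases dataset.unbatch() may instead be dataset.apply(tf.data.experimental.unbatch()):)

# Sketch only: make_csv_dataset already emits batches, so flatten to single
# samples first, then re-batch and drop the incomplete final batch.
train_dataset = train_dataset.unbatch().batch(BATCH_SIZE, drop_remainder=True)
test_dataset = test_dataset.unbatch().batch(BATCH_SIZE, drop_remainder=True)
val_dataset = val_dataset.unbatch().batch(BATCH_SIZE, drop_remainder=True)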
Additionally, if I don't use the val/test split at all, I can train just fine (with the same batch size and steps-per-epoch math, just without the validation and split), but obviously nobody wants that.

steps_per_epoch = int(DATASET_SIZE // BATCH_SIZE)
history = model.fit(ds, epochs=NUM_EPOCHS, steps_per_epoch=steps_per_epoch)

What other strategies could I use to solve this? My sample sizes are usually in the range of a few hundred to a few thousand.


vlju58qv  1#

Try this:

# flatten the pre-batched dataset back to individual samples before splitting
val_ds = dataset.unbatch().take(val_size).batch(BATCH_SIZE)
train_ds = dataset.unbatch().skip(val_size).take(train_size).batch(BATCH_SIZE)
test_ds = dataset.unbatch().skip(train_size + val_size).batch(BATCH_SIZE)

.unbatch() should fix this, since make_csv_dataset already returns batched data: take()/skip() on it count batches rather than samples, so your sample-sized skips run past the end of the data. Unbatching first makes the sample-count arithmetic line up.
This takes val_size samples, then skips those and takes train_size samples, and whatever data remains ends up in test_ds.
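
(Putting this together with the question's step arithmetic, a minimal sketch follows. Variable names are reused from the question; the repeat() calls and drop_remainder are my additions, not part of the original answer:)

# Split on individual samples (the csv-backed dataset is already batched, so
# unbatch first), then re-batch, repeat indefinitely, and let steps_per_epoch /
# validation_steps bound each epoch.
samples = ds.unbatch()

train_ds = samples.take(train_size).batch(BATCH_SIZE, drop_remainder=True).repeat()
val_ds = samples.skip(train_size).take(val_size).batch(BATCH_SIZE, drop_remainder=True).repeat()
test_ds = samples.skip(train_size + val_size).take(test_size).batch(BATCH_SIZE)

steps_per_epoch = train_size // BATCH_SIZE
validation_steps = max(1, val_size // BATCH_SIZE)

history = model.fit(train_ds,
                    epochs=NUM_EPOCHS,
                    steps_per_epoch=steps_per_epoch,
                    validation_data=val_ds,
                    validation_steps=validation_steps)

(One caveat: because the original make_csv_dataset was built with shuffle=True and num_epochs greater than 1, a take/skip split can see different rows on each pass. It is probably cleaner to build the csv dataset with num_epochs=1 and shuffle=False, split it, and then shuffle only the training split, so the three splits stay disjoint.)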
