TensorFlow.js loss is NaN when fitting

ttvkxqim · asked 2022-11-16 · in: Other

I'm trying to recreate, with TensorFlow.js, the same example I built with the Python version of TensorFlow. Unfortunately, when I run the script, the loss logged during training is NaN, and I don't know why.
What I want to achieve is a simple text classifier that returns 0 or 1 based on the trained model.
This is the code I have translated so far:

import * as tf from '@tensorflow/tfjs'

// Load the binding:
//require('@tensorflow/tfjs-node');  // Use '@tensorflow/tfjs-node-gpu' if running with GPU.

// utils
const tuple = <A, B>(a: A, b: B): [A, B] => [a, b]

// prepare the data, first is result, second is the raw text
const data: [number, string][] = [
    [0, 'aaaaaaaaa'],
    [0, 'aaaa'],
    [1, 'bbbbbbbbb'],
    [1, 'bbbbbb']
]

// normalize the data: map each char to its char code, pad with -1, and truncate to length 10
const arrayFill = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
const normalizeData = data.map(item => {
    return tuple(item[0], item[1].split('').map(c => c.charCodeAt(0)).concat(arrayFill).slice(0, 10))
})

const xs = tf.tensor(normalizeData.map(i => i[1]))
const ys = tf.tensor(normalizeData.map(i => i[0]))

console.log(xs)

// Configs
const LEARNING_RATE = 1e-4

// Train a simple model:
//const optimizer = tf.train.adam(LEARNING_RATE)
const model = tf.sequential();
model.add(tf.layers.embedding({inputDim: 1000, outputDim: 16}))
model.add(tf.layers.globalAveragePooling1d({}))
model.add(tf.layers.dense({units: 16, activation: 'relu'}))
model.add(tf.layers.dense({units: 1, activation: 'sigmoid'}))
model.summary()
model.compile({optimizer: 'adam', loss: 'binaryCrossentropy', metrics: ['accuracy']});

model.fit(xs, ys, {
  epochs: 10,
  validationData: [xs, ys],
  callbacks: {
    onEpochEnd: async (epoch, log) => {
      console.log(`Epoch ${epoch}: loss = ${log.loss}`);
    }
  }
});

(run here as plain JS) This is the output I get:

_________________________________________________________________
Layer (type)                 Output shape              Param #
=================================================================
embedding_Embedding1 (Embedd [null,null,16]            16000
_________________________________________________________________
global_average_pooling1d_Glo [null,16]                 0
_________________________________________________________________
dense_Dense1 (Dense)         [null,16]                 272
_________________________________________________________________
dense_Dense2 (Dense)         [null,1]                  17
=================================================================
Total params: 16289
Trainable params: 16289
Non-trainable params: 0
_________________________________________________________________
Epoch 0: loss = NaN
Epoch 1: loss = NaN
Epoch 2: loss = NaN
Epoch 3: loss = NaN
Epoch 4: loss = NaN
Epoch 5: loss = NaN
Epoch 6: loss = NaN
Epoch 7: loss = NaN
Epoch 8: loss = NaN
Epoch 9: loss = NaN
dced5bon · Answer 1

The loss or the predictions can become NaN. This is a consequence of the vanishing gradient problem: during training, the gradients (partial derivatives) can become very small (tending toward 0). The binaryCrossentropy loss function uses a logarithm in its computation, and depending on the values fed into that logarithm, the result can be NaN — for example, a prediction saturating at exactly 0 or 1 makes the log term diverge.
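To make the logarithm point concrete (my own sketch, not part of the original answer), here is the binary cross-entropy formula evaluated by hand in plain TypeScript; once a prediction hits exactly 0 or 1, the log term degenerates:

// Binary cross-entropy for one example: -(y*log(p) + (1-y)*log(1-p))
const bce = (y: number, p: number): number =>
  -(y * Math.log(p) + (1 - y) * Math.log(1 - p))

console.log(bce(1, 0.9)) // ~0.105, a healthy loss value
console.log(bce(1, 0))   // Infinity: Math.log(0) is -Infinity
console.log(bce(0, 0))   // NaN: 0 * Math.log(0) is 0 * -Infinity, which is NaN in JS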

If the model's weights become NaN, the predicted y can become NaN as well, and then so does the loss. Adjusting the number of epochs can help avoid this. Another way to address the problem is to change the loss or the optimizer function, as sketched below.
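For instance (my own sketch, not from the original answer), the 'adam' string shorthand in the question's compile() call can be replaced with an explicit optimizer instance at a smaller step size, reusing the LEARNING_RATE constant the question already defines:

// Explicit Adam instance instead of the 'adam' string shorthand;
// LEARNING_RATE (1e-4) comes from the question's code above.
const optimizer = tf.train.adam(LEARNING_RATE)
model.compile({optimizer, loss: 'binaryCrossentropy', metrics: ['accuracy']})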
That said, this code's loss is not NaN: here is the code executed on StackBlitz. Also, note the answer below, where the model was corrected so that it does not predict NaN.

u3r8eeie · Answer 2

In case someone else runs into the same problem: my mistake was forgetting to normalize the dataset used for training (I was working on the Boston Housing problem). See here.
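A minimal min-max normalization sketch in TensorFlow.js (my own illustration, assuming a plain numeric feature matrix like the Boston Housing columns — not the poster's actual code):

import * as tf from '@tensorflow/tfjs'

// Scale each column of a feature tensor into [0, 1]: (x - min) / (max - min)
const normalize = (x: tf.Tensor2D): tf.Tensor2D => {
  const min = x.min(0) // per-column minimum
  const max = x.max(0) // per-column maximum
  return x.sub(min).div(max.sub(min)) as tf.Tensor2D
}

const raw = tf.tensor2d([[1, 200], [2, 400], [3, 800]])
normalize(raw).print() // every value now lies in [0, 1]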
