I am facing an issue when trying to use CuDNNLSTM instead of keras.layers.LSTM.
This is the error I am getting:
Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, seq_length, batch_size]: [1, 300, 512, 1, 5521, 128]
  {{node bidirectional_1/CudnnRNN_1}} = CudnnRNN[T=DT_FLOAT, _class=["loc:@train...NNBackprop"], direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=87654321, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"](bidirectional_1/transpose_1, bidirectional_1/ExpandDims_1, bidirectional_1/ExpandDims_1, bidirectional_1/concat_1)
  {{node loss/mul/_75}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1209_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]
Also, I got this error in one of the runs:
InternalError: GPU sync failed
And the kernel kept dying after each run.
I only started getting these errors when I tried to run the model with CuDNNLSTM on a VM instance on Google Cloud.
My code is:
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense
from keras.models import Model

MAX_LEN = max(len(article) for article in X_train_tokens)
EMBEDDING_DIM = 300
vocab_size = len(word_to_id)
classes = 2

# Text input
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Bidirectional(LSTM(512, return_sequences=False))(embedding)
pred = Dense(2, activation='softmax')(x)
model = Model(inputs=[text_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])

batch_size = 128
generator = text_training_generator(batch_size)
steps = len(X_train) // batch_size  # steps_per_epoch must be an integer
model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10)
The model summary:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 5521) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 5521, 300) 8099100
_________________________________________________________________
bidirectional_1 (Bidirection (None, 1024) 3330048
_________________________________________________________________
dense_1 (Dense) (None, 2) 2050
=================================================================
Total params: 11,431,198
Trainable params: 11,431,198
Non-trainable params: 0
_________________________________________________________________
3 Answers
50pmv0ei #1
Your GPU is probably running out of memory. Your network is very large, with 11 million trainable parameters. Do you really need a 512*2 output from the recurrent layer?
Also, your embedding_dim is very large while your vocabulary is small, only ~5k words. I suspect your network is far too complex for your problem. I would suggest trying an embedding size of 32 and an LSTM size of 32 as a start. If your accuracy is still poor, you can increase the complexity from there.
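The 11 million figure can be checked against the summary by hand using the standard Keras LSTM parameter formula; a quick sketch of the arithmetic, with layer sizes taken from the summary above:

```python
# Checking "Total params: 11,431,198" from the model summary by hand.
embedding_dim = 300
units = 512

vocab_size = 8099100 // embedding_dim         # rows in the embedding matrix
embedding_params = vocab_size * embedding_dim              # 8,099,100

# One LSTM direction: 4 gates, each with an input kernel, a recurrent
# kernel, and a bias vector.
lstm_one_direction = 4 * (units * (embedding_dim + units) + units)  # 1,665,024
bidirectional_params = 2 * lstm_one_direction              # 3,330,048

dense_params = (2 * units) * 2 + 2   # 1024 inputs -> 2 classes, plus biases: 2,050

total = embedding_params + bidirectional_params + dense_params
print(total)  # 11431198
```

Nearly all of the parameters sit in the embedding and the 512-unit bidirectional layer, which is why the first answer targets those two sizes.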
gudnpqoy #2
Recently I ran into this problem as well, with my own model on Tensorflow 2.4.1; I also found it reproducible with, for example, the model from the tutorial Text generation with an RNN. It runs on the CPU (consuming ~3 GB of RAM), but training fails on a GPU with 8 GB of memory with the error above. I also observed the GPU memory filling up to its limit during the model.compile() call, before the error appeared. I fixed it by forbidding Tensorflow from allocating the full GPU memory, by adding the appropriate call early enough in the script (e.g., right after import tensorflow as tf).
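(The configuration snippet this answer added was stripped when the page was mirrored; for Tensorflow 2.x, on-demand allocation is typically enabled like this — a sketch, not necessarily the answerer's exact code:)

```python
import tensorflow as tf

# Let the GPU memory pool grow on demand instead of reserving all
# device memory up front; must run before any GPU op is executed.
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```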
This instructs Tensorflow to allocate GPU memory on demand. With that in place, training runs on the GPU and consumes only about 2.2 GB of memory.
sauutmhj #3
Try reducing the batch size to 16.
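A rough sketch of why a smaller batch helps here, assuming float32 activations and that per-timestep layer outputs are kept for backpropagation (a back-of-the-envelope approximation, not an exact accounting of Tensorflow's allocator):

```python
# Stored activations for one bidirectional LSTM layer, float32.
seq_len = 5521     # MAX_LEN, from the model summary above
units = 2 * 512    # bidirectional output width

def activation_bytes(batch_size):
    # batch x timesteps x output width, 4 bytes per float32 value
    return batch_size * seq_len * units * 4

print(activation_bytes(128) / 2**30)  # ≈ 2.7 GiB at batch 128
print(activation_bytes(16) / 2**30)   # ≈ 0.34 GiB at batch 16
```

With a 5521-step sequence, activation memory scales linearly with the batch size, so dropping from 128 to 16 cuts this term by 8x.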