TensorFlow v2 alternative for the sequence loss in an example notebook

Asked by s4n0splo on 2022-11-16

I am working through the following TensorFlow example: https://github.com/googledatalab/notebooks/blob/master/samples/TensorFlow/LSTM%20Punctuation%20Model%20With%20TensorFlow.ipynb. It is evidently written for TF v1, so I ran it through the v2 upgrade script, which reported three main incompatibilities:

ERROR: Using member tf.contrib.rnn.DropoutWrapper in deprecated module tf.contrib. tf.contrib.rnn.DropoutWrapper cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.legacy_seq2seq.sequence_loss_by_example in deprecated module tf.contrib. tf.contrib.legacy_seq2seq.sequence_loss_by_example cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
ERROR: Using member tf.contrib.framework.get_or_create_global_step in deprecated module tf.contrib. tf.contrib.framework.get_or_create_global_step cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.

So, for compatibility, I manually replaced tf.contrib.framework.get_or_create_global_step with tf.compat.v1.train.get_or_create_global_step, and tf.contrib.rnn.DropoutWrapper with tf.compat.v1.nn.rnn_cell.DropoutWrapper.
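For reference, a minimal sketch of those two substitutions (the cell type, unit count, and keep probability here are placeholders, not the notebook's actual values):

```python
import tensorflow as tf

# The upgraded model still runs as a v1-style graph, so the removed
# tf.contrib symbols can be swapped for their tf.compat.v1 equivalents.
with tf.compat.v1.Graph().as_default():
    # tf.contrib.framework.get_or_create_global_step ->
    global_step = tf.compat.v1.train.get_or_create_global_step()

    # tf.contrib.rnn.DropoutWrapper ->
    cell = tf.compat.v1.nn.rnn_cell.BasicLSTMCell(num_units=128)
    cell = tf.compat.v1.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=0.5)
```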
However, I could not find a backward-compatible replacement for tf.contrib.legacy_seq2seq.sequence_loss_by_example. I tried installing TensorFlow Addons and using its seq2seq loss function, but I could not work out how to adapt it to the rest of the code.
Along the way I hit errors such as "Consider casting elements to a supported type." and "Logits must be a [batch_size x sequence_length x logits] tensor", so presumably I am not wiring something up correctly.
So my questions are: do you know whether legacy_seq2seq.sequence_loss_by_example is available through some third-party plugin/library I could use? And more importantly, can anyone show me a TensorFlow v2 alternative for this loss function, so that it behaves like the code below?

output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, size])
softmax_w = tf.compat.v1.get_variable("softmax_w", [size, len(TARGETS)], dtype=tf.float32)
softmax_b = tf.compat.v1.get_variable("softmax_b", [len(TARGETS)], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
self._predictions = tf.argmax(input=logits, axis=1)
self._targets = tf.reshape(input_.targets, [-1])
loss = tfa.seq2seq.sequence_loss(
    [logits],
    [tf.reshape(input_.targets, [-1])],
    [tf.ones([batch_size * num_steps], dtype=tf.float32)])
self._cost = cost = tf.reduce_sum(input_tensor=loss) / batch_size
self._final_state = state

Answer (by gfttwv5a1):

First install TensorFlow Addons:

pip install tensorflow-addons

Then import it into your program:

import tensorflow_addons as tfa

and use:

tfa.seq2seq.sequence_loss
