TensorFlow: Non-repeatable results

Asked by wfypjpf4 on 2022-11-16

Question
I have a Python script that uses TensorFlow to create a multilayer perceptron network (with dropout) in order to do binary classification. Even though I have been careful to set both the Python and TensorFlow seeds, I get non-repeatable results. If I run once and then run again, I get different results. I can even run once, quit Python, restart Python, run again, and get different results.
What I've tried
I know some people have posted questions about getting non-repeatable results in TensorFlow (e.g., "How to get stable results...", "set_random_seed not working...", "How to get reproducible result in TensorFlow"), and the answers usually turn out to involve an incorrect use/understanding of tf.set_random_seed(). I have made sure to implement the solutions given, but that has not solved my problem.
A common mistake is not realizing that tf.set_random_seed() is only a graph-level seed, and that running the script multiple times will alter the graph, which would explain the non-repeatable results. I used the following statement to print out the entire graph, and verified (via diff) that the graph is identical even when the results are different.

print [n.name for n in tf.get_default_graph().as_graph_def().node]
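One way to capture the graph for such a diff is to dump one node name per line to a text file and compare the files from two runs (the filename here is arbitrary):

# Dump one node name per line, then diff the files from two runs
with open('graph_run1.txt', 'w') as f:
    for n in tf.get_default_graph().as_graph_def().node:
        f.write(n.name + '\n')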

I have also used function calls like tf.reset_default_graph() and tf.get_default_graph().finalize() to avoid any changes to the graph, even though this is probably overkill.

(Relevant) code

My script is ~360 lines long, so here are the relevant lines (snipped code is indicated). Any items in ALL_CAPS are constants that are defined in the Parameters block below.

import random  # used below by random.seed() and random.sample()
import numpy as np
import tensorflow as tf

from copy import deepcopy
from tqdm import tqdm  # Progress bar

# --------------------------------- Parameters ---------------------------------
(snip)

# --------------------------------- Functions ---------------------------------
(snip)

# ------------------------------ Obtain Train Data -----------------------------
(snip)

# ------------------------------ Obtain Test Data -----------------------------
(snip)

random.seed(12345)
tf.set_random_seed(12345)

(snip)

# ------------------------- Build the TensorFlow Graph -------------------------

tf.reset_default_graph()

with tf.Graph().as_default():
    
    x = tf.placeholder("float", shape=[None, N_INPUT])
    y_ = tf.placeholder("float", shape=[None, N_CLASSES])
    
    # Store layers weight & bias
    weights = {
        'h1': tf.Variable(tf.random_normal([N_INPUT, N_HIDDEN_1])),
        'h2': tf.Variable(tf.random_normal([N_HIDDEN_1, N_HIDDEN_2])),
        'h3': tf.Variable(tf.random_normal([N_HIDDEN_2, N_HIDDEN_3])),
        'out': tf.Variable(tf.random_normal([N_HIDDEN_3, N_CLASSES]))
    }
    
    biases = {
        'b1': tf.Variable(tf.random_normal([N_HIDDEN_1])),
        'b2': tf.Variable(tf.random_normal([N_HIDDEN_2])),
        'b3': tf.Variable(tf.random_normal([N_HIDDEN_3])),
        'out': tf.Variable(tf.random_normal([N_CLASSES]))
    }

    # Construct model
    pred = multilayer_perceptron(x, weights, biases, USE_DROP_LAYERS, DROP_KEEP_PROB)
    
    mean1 = tf.reduce_mean(weights['h1'])
    mean2 = tf.reduce_mean(weights['h2'])
    mean3 = tf.reduce_mean(weights['h3'])

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y_))
    
    regularizers = (tf.nn.l2_loss(weights['h1']) + tf.nn.l2_loss(biases['b1']) +
                    tf.nn.l2_loss(weights['h2']) + tf.nn.l2_loss(biases['b2']) +
                    tf.nn.l2_loss(weights['h3']) + tf.nn.l2_loss(biases['b3']))
    
    cost += COEFF_REGULAR * regularizers
    
    optimizer = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cost)
    
    out_labels = tf.nn.softmax(pred)
    
    sess = tf.InteractiveSession()
    sess.run(tf.initialize_all_variables())
    
    tf.get_default_graph().finalize()  # Lock the graph as read-only
    
    # Print the default graph in text form
    print [n.name for n in tf.get_default_graph().as_graph_def().node]
    
    # --------------------------------- Training ----------------------------------
    
    print "Start Training"
    pbar = tqdm(total = TRAINING_EPOCHS)
    for epoch in range(TRAINING_EPOCHS):
        avg_cost = 0.0
        batch_iter = 0
        
        train_outfile.write(str(epoch))
        
        while batch_iter < BATCH_SIZE:
            train_features = []
            train_labels = []
            batch_segments = random.sample(train_segments, 20)
            for segment in batch_segments:
                train_features.append(segment[0])
                train_labels.append(segment[1])
            sess.run(optimizer, feed_dict={x: train_features, y_: train_labels})
            line_out = "," + str(batch_iter) + "\n"
            train_outfile.write(line_out)
            line_out = ",," + str(sess.run(mean1, feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(mean2, feed_dict={x: train_features, y_: train_labels}))
            line_out += "," + str(sess.run(mean3, feed_dict={x: train_features, y_: train_labels})) + "\n"
            train_outfile.write(line_out)
            avg_cost += sess.run(cost, feed_dict={x: train_features, y_: train_labels})/BATCH_SIZE
            batch_iter += 1
    
        line_out = ",,,,," + str(avg_cost) + "\n"
        train_outfile.write(line_out)
        pbar.update(1)  # Increment the progress bar by one
    
    train_outfile.close()
    print "Completed training"

# ------------------------------ Testing & Output ------------------------------

keep_prob = 1.0  # Do not use dropout when testing

print "now reducing mean"
print(sess.run(mean1, feed_dict={x: test_features, y_: test_labels}))

print "TRUE LABELS"
print(test_labels)
print "PREDICTED LABELS"
pred_labels = sess.run(out_labels, feed_dict={x: test_features})
print(pred_labels)

output_accuracy_results(pred_labels, test_labels)

sess.close()

What's not repeatable

As you can see, I am outputting the results from each epoch to a file, and I also print out accuracy numbers at the end. None of these match from run to run, even though I believe I have set the seed(s) correctly. I have used both random.seed(12345) and tf.set_random_seed(12345).

Setup details

TensorFlow version 0.8.0 (CPU only)
Enthought Canopy version 1.7.2 (Python 2.7, not 3.+)
Mac OS X version 10.11.3

k10s72fa #1

In addition to the graph-level seed, you also need to set an operation-level seed, i.e.

import tensorflow as tf

tf.reset_default_graph()
a = tf.constant([1, 1, 1, 1, 1], dtype=tf.float32)
graph_level_seed = 1
operation_level_seed = 1
tf.set_random_seed(graph_level_seed)                  # graph-level seed
b = tf.nn.dropout(a, 0.5, seed=operation_level_seed)  # operation-level seed
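Applied to the weight initializers in the question, that would look something like the following sketch (N_INPUT and N_HIDDEN_1 are the asker's constants; the seed value is illustrative):

SEED = 12345
weights = {
    'h1': tf.Variable(tf.random_normal([N_INPUT, N_HIDDEN_1], seed=SEED)),
    # ... and likewise for 'h2', 'h3', 'out', and the biases
}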
i7uaboj4 #2

Some operations on the GPU are not fully deterministic (a speed vs. precision trade-off).
I have also observed that for the seed to have any effect, tf.set_random_seed(...) needs to be called before the Session is created. Also, you should either completely restart the Python interpreter every time you run your code, or call tf.reset_default_graph() at the start.
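A minimal sketch of that ordering (TF 1.x-style API, matching the question's version):

import tensorflow as tf

tf.reset_default_graph()   # start every run from a fresh default graph
tf.set_random_seed(12345)  # graph-level seed, set before creating the Session
sess = tf.Session()        # only create the Session after seeding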

oalqel3c #3

In TensorFlow 2.0, tf.set_random_seed(42) has changed to tf.random.set_seed(42).
https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/random/set_seed
If you are only using TensorFlow, this should be the only seed you need.
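A minimal TF 2.x sketch of the renamed call (the values are just illustrative):

import tensorflow as tf

tf.random.set_seed(42)        # global seed in TF 2.x
print(tf.random.normal([3]))  # prints the same values on every fresh run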

ac1kyiln #4

To add to Yaroslav's answer, in addition to the operation- and graph-level seeds, you should also set the numpy seed, since some backend operations depend on numpy.
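For example (the seed value is illustrative):

import numpy as np
np.random.seed(12345)  # numpy RNG, in addition to the TF graph/op-level seeds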

gupuwyp2 #5

I train and test a huge deep network with TensorFlow, and here is how I get reproducible results.

  • This was tested on Ubuntu 16.04 with tensorflow 1.9.0 and python 2.7, using both the GPU and the CPU.
  • Add the following lines of code before doing anything else (the first lines of the main function):
import os
import random
import numpy as np
import tensorflow as tf

SEED = 1  # use this constant seed everywhere

os.environ['PYTHONHASHSEED'] = str(SEED)
random.seed(SEED)  # `python` built-in pseudo-random generator
np.random.seed(SEED)  # numpy pseudo-random generator
tf.set_random_seed(SEED)  # tensorflow pseudo-random generator
  • Reset the default graph before starting the session:
tf.reset_default_graph()  # this goes before sess = tf.Session()
  • Find all the tensorflow functions in your code that accept a seed as an argument, and put your constant seed in all of them (SEED in my code).

Here are some of those functions: tf.nn.dropout, tf.contrib.layers.xavier_initializer, and so on; see the sketch after the note below.

Note: this step may seem unnecessary, since we have already set a seed for tensorflow with tf.set_random_seed, but trust me, you need this! See Yaroslav's answer.
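A minimal sketch of passing the constant seed into such functions (TF 1.x API; the shapes and keep probability are illustrative):

import tensorflow as tf

SEED = 1
x = tf.placeholder(tf.float32, shape=[None, 128])

# Operation-level seed on dropout
dropped = tf.nn.dropout(x, keep_prob=0.5, seed=SEED)

# Operation-level seed on an initializer
init = tf.contrib.layers.xavier_initializer(seed=SEED)
w = tf.get_variable('w', shape=[128, 64], initializer=init)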
