How to fix the "No Algorithm Worked" Keras error?

eqfvzcg8 · published 2023-06-06 in Go
Follow (0) | Answers (8) | Views (548)

I am trying to build an FCN-16 model in Keras, and I initialized its weights with the weights of a similar FCN-16 model.

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, Add, Activation

def FCN8(nClasses, input_height=256, input_width=256):

    ## input_height and input_width must be divisible by 32 because max pooling with filter size (2, 2) is applied 5 times,
    ## which makes input_height and input_width 2^5 = 32 times smaller
    assert input_height % 32 == 0
    assert input_width % 32 == 0
    IMAGE_ORDERING = "channels_last"

    img_input = Input(shape=(input_height, input_width, 3))  ## e.g. (256, 256, 3)

    ## Block 1
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_1', data_format=IMAGE_ORDERING)(
        img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='conv1_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x

    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='conv2_2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='conv3_3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    pool3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv4_3', data_format=IMAGE_ORDERING)(x)
    pool4 = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(
        x)  ## (None, 16, 16, 512) for a 256x256 input

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_1', data_format=IMAGE_ORDERING)(pool4)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='conv5_3', data_format=IMAGE_ORDERING)(x)
    pool5 = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(
        x) 

    n = 4096
    o = (Conv2D(n, (7, 7), activation='relu', padding='same', name="fc6", data_format=IMAGE_ORDERING))(pool5)
    conv7 = (Conv2D(n, (1, 1), activation='relu', padding='same', name="fc7", data_format=IMAGE_ORDERING))(o)

    conv7 = (Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="conv7_1", data_format=IMAGE_ORDERING))(conv7)

    conv7_4 = Conv2DTranspose(nClasses, kernel_size=(2, 2), strides=(2, 2),  data_format=IMAGE_ORDERING)(
        conv7)

    pool411 = (
        Conv2D(nClasses, (1, 1), activation='relu', padding='same', name="pool4_11",use_bias=False, data_format=IMAGE_ORDERING))(pool4)

    o = Add(name="add")([pool411, conv7_4])

    o = Conv2DTranspose(nClasses, kernel_size=(16, 16), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(o)
    o = (Activation('softmax'))(o)

    GDI= Model(img_input, o)
    GDI.load_weights(Model_Weights_path)

    model = Model(img_input, o)

    return model

Then I did the train/test split and tried to fit the model:

from keras import optimizers

sgd = optimizers.SGD(lr=1E-2, momentum=0.91,decay=5**(-4), nesterov=True)

model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

hist1 = model.fit(X_train,y_train,validation_data=(X_test,y_test),batch_size=32,epochs=1000,verbose=2)

model.save("/content/drive/My Drive/HCI_prep/new.h5")

But this code throws the following error in the first epoch:

NotFoundError: 2 root error(s) found.
  (0) Not found: No algorithm worked!
     {{node pool4_11_3/Conv2D}}
     [[loss_4/mul/_629]]
  (1) Not found: No algorithm worked!
     {{node pool4_11_3/Conv2D}}
0 successful operations. 0 derived errors ignored.


55ooxyrt1#

Add the following to your code:

from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession

config = ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of reserving it all up front
session = InteractiveSession(config=config)

Then restart the Python kernel.


pbgvytdp2#

I had the same problem.
Setting padding='same' on the MaxPooling layers did not work for me.
What did work was changing the color_mode argument of my train and test generators from 'rgb' to 'grayscale'.
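
As an illustration only (the directory path, image size, and batch size below are placeholders, not taken from the answer), the change looks roughly like this; the model's input layer then has to expect a single channel:

from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1.0 / 255).flow_from_directory(
    'data/train',               # placeholder path
    target_size=(256, 256),     # placeholder size
    color_mode='grayscale',     # was 'rgb'
    class_mode='categorical',
    batch_size=32)

# The model's input shape must then be (256, 256, 1) instead of (256, 256, 3).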


a64a0gku3#

This is what worked for me:

import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
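
Note that this has to run before any other TensorFlow operation initializes the GPU, otherwise set_memory_growth raises a RuntimeError. A slightly more defensive variant (my addition, not part of the original answer) also copes with machines that expose several GPUs or none:

import tensorflow as tf

# Enable on-demand memory growth for every visible GPU before the devices
# are initialized; the loop simply does nothing when no GPU is present.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)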

zed5wv104#

In my case this was solved by killing all processes that were still allocating memory on one of the GPUs; apparently one of them had not terminated correctly. I did not have to change any code.
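
One way to spot such leftover processes (a sketch that assumes the NVIDIA driver utilities are installed) is to look at the process table that nvidia-smi prints before starting a new run:

import subprocess

# The "Processes" table at the bottom of the nvidia-smi report lists every PID
# that still holds GPU memory; those processes can then be terminated manually.
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)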


eufgjt7s5#

My problem was the input_shape I used when calling the model; it did not match the expected (?, 28, 28, 3).
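
A quick way to catch this kind of mismatch (model and X_train are placeholder names, not from the answer) is to compare the model's declared input shape with the data you are about to feed it:

# Both shapes must agree apart from the batch dimension; e.g. a model built for
# (None, 28, 28, 3) will fail on single-channel (N, 28, 28, 1) data.
print(model.input_shape)   # e.g. (None, 28, 28, 3)
print(X_train.shape)       # e.g. (60000, 28, 28, 3)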


f0ofjuux6#

For reference, the complete code that fixed the error for me is given below:

import tensorflow.keras
from tensorflow.keras.models import *
from tensorflow.keras.layers import *

IMAGE_ORDERING = 'channels_last'

# take vgg-16 pretrained model from "https://github.com/fchollet/deep-learning-models" here
pretrained_url = "https://github.com/fchollet/deep-learning-models/" \
                 "releases/download/v0.1/" \
                 "vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"

pretrained = 'imagenet'  # 'imagenet' if weights need to be initialized!

"""
Function Name: get_vgg_encoder()
Functionalities: This function defines the VGG encoder part of the FCN network
                 and initialize this encoder part with VGG pretrained weights.
Parameter:input_height=224,  input_width=224, pretrained=pretrained
Returns: final layer of every blocks as f1,f2,f3,f4,f5
"""

def get_vgg_encoder(input_height=224, input_width=224, pretrained=pretrained):
    pad = 1

    # height and width must be divisible by 32 for FCN
    assert input_height % 32 == 0
    assert input_width % 32 == 0

    img_input = Input(shape=(input_height, input_width, 3))

    # Unlike the original paper, stride=1 is not set explicitly here,
    # because it is already Keras's default

    x = (ZeroPadding2D((pad, pad), data_format=IMAGE_ORDERING))(img_input)
    x = Conv2D(64, (3, 3), activation='relu', padding='valid', name='block1_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    x = Dropout(0.5)(x)
    f3 = x

    # Block 4
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(x)
    f4 = x

    # Block 5
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format=IMAGE_ORDERING)(x)
    x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format=IMAGE_ORDERING)(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(x)
    # x= Dropout(0.5)(x)

    f5 = x

    # Initialize the encoder with pretrained ImageNet weights
    if pretrained == 'imagenet':
        VGG_Weights_path = tensorflow.keras.utils.get_file(
            pretrained_url.split("/")[-1], pretrained_url)

        Model(img_input, x).load_weights(VGG_Weights_path)

    return img_input, [f1, f2, f3, f4, f5]

"""
Function Name: fcn_16()
Functionalities: This function defines the Fully Convolutional part of the FCN network
                 and adds skip connections to build FCN-16 network
Parameter:n_classes, encoder=get_vgg_encoder, input_height=224,input_width=224
Returns: model
"""

def fcn_16(n_classes, encoder=get_vgg_encoder, input_height=224, input_width=224):
    # Take levels from the base model, i.e. vgg
    img_input, levels = encoder(input_height=input_height, input_width=input_width)
    [f1, f2, f3, f4, f5] = levels

    o = f5

    # fc6
    o = (Conv2D(4096, (7, 7), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)

    # fc7
    o = (Conv2D(4096, (1, 1), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.3)(o)

    conv7 = (Conv2D(1, (1, 1), activation='relu', padding='same', name="score_sal", data_format=IMAGE_ORDERING))(o)

    conv7_4 = Conv2DTranspose(1, kernel_size=(4, 4), strides=(2, 2), padding='same', name="upscore_sal2",
                              use_bias=False, data_format=IMAGE_ORDERING)(conv7)

    pool411 = (
        Conv2D(1, (1, 1), activation='relu', padding='same', name="score_pool4", data_format=IMAGE_ORDERING))(f4)

    # Add a crop layer so both branches have the same spatial size.
    # (crop() is a helper not defined in this answer; a sketch is given after the code.)
    o, o2 = crop(pool411, conv7_4, img_input)

    # add skip connection
    o = Add()([o, o2])

    # 16 x upsample
    o = Conv2DTranspose(n_classes, kernel_size=(32, 32), strides=(16, 16), use_bias=False, data_format=IMAGE_ORDERING)(
        o)

    # crop layer
    ## Caffe has a crop layer that takes o and img_input, computes their size
    ## difference and crops accordingly. Keras's Cropping2D instead takes the
    ## cropping amounts as a tuple, so the value was worked out manually:
    ## output dim was 240, input dim was 224; 240 - 224 = 16, so 16 / 2 = 8 per side.

    score = Cropping2D(cropping=((8, 8), (8, 8)), data_format=IMAGE_ORDERING)(o)

    o = (Activation('sigmoid'))(score)
    model = Model(img_input, o)

    model.model_name = "fcn_16"

    return model
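
Note that the crop(...) helper called above is not defined in this answer. A minimal sketch of one possible implementation, assuming it behaves like the crop utility in divamgupta's image-segmentation-keras project (crop whichever tensor is larger so both branches match), could look like the following; it reuses the Model, Cropping2D, and IMAGE_ORDERING names already available at the top of this answer:

def crop(o1, o2, i):
    # Compare the spatial shapes of the two branches (channels_last assumed)
    # and crop the larger one on the right/bottom so they can be added.
    h1, w1 = Model(i, o1).output_shape[1:3]
    h2, w2 = Model(i, o2).output_shape[1:3]

    cx = abs(w1 - w2)
    cy = abs(h1 - h2)

    if w1 > w2:
        o1 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o1)
    elif w2 > w1:
        o2 = Cropping2D(cropping=((0, 0), (0, cx)), data_format=IMAGE_ORDERING)(o2)

    if h1 > h2:
        o1 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o1)
    elif h2 > h1:
        o2 = Cropping2D(cropping=((0, cy), (0, 0)), data_format=IMAGE_ORDERING)(o2)

    return o1, o2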

s1ag04yj7#

This error is quite generic and basically just says that "something" went wrong. As the variety of answers shows, it can be caused by an incompatibility between your implementation and the underlying Keras/TensorFlow version, by an incorrect filter size, or by something else entirely.
There is no single fix. For me it was also an input-shape problem: use grayscale instead of RGB, because the network expected a single channel.


luaexgnf8#

As @Soren said, this error depends on the circumstances. It can be caused by a lack of VRAM, a v1/v2 incompatibility, a shape problem when calling conv/pooling layers, and so on.
In my case the error appeared because I was loading a saved model for inference under Python 3.6 (and TF 2.4.2) while the model had been trained under Python 3.10 (and TF 2.8), and some of the flexible shape handling done by the Keras/TF functional API is not backward compatible with older TF versions.
So I would also suggest checking your training and inference environments to make sure there is no mismatch between them.
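
A quick way to compare the two environments (a sketch; nothing here comes from the original answer) is to print the interpreter and TensorFlow versions in both and diff the output:

import sys
import tensorflow as tf

# Run this in both the training and the inference environment and compare;
# any mismatch in the Python or TensorFlow versions is a candidate cause.
print("python    :", sys.version.split()[0])
print("tensorflow:", tf.__version__)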
