Paddle fluid.layers.topk raises "Begin index must be less than end index in ddim slice."

tzxcd3kk, posted on 2021-11-29

While writing a loss function I need fluid.layers.topk, and I get the following error:

paddle.fluid.core.EnforceNotMet: Begin index must be less than end index in ddim slice. at [/paddle/paddle/fluid/framework/ddim.cc:235]

The code is:
errors = fluid.layers.abs(fg - crop_logit)
errors_sorted, perm = fluid.layers.topk(errors, k=errors.shape[0])
errors has length 103041.
Changing k to 1 still raises the same error, but using fluid.layers.argsort works without error.
Any help would be appreciated.

Environment:
CUDA 9.0, cuDNN 7.0
paddlepaddle-gpu version: 1.2.1.post97

bksxznpy1#

I suggest printing the shape of errors here. Inside the topk op, k is applied over the last dimension.
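To illustrate what "k over the last dimension" means, here is a NumPy sketch (an illustration of the semantics, not the fluid API itself): for a 2-D input, each row independently contributes its k largest values.

```python
import numpy as np

def topk_last_dim(x, k):
    """NumPy sketch of top-k over the last dimension:
    each row yields its k largest values, sorted descending."""
    # argsort ascending, take the last k indices, reverse to descending order
    idx = np.argsort(x, axis=-1)[..., -k:][..., ::-1]
    return np.take_along_axis(x, idx, axis=-1), idx

x = np.array([[3.0, 1.0, 2.0],
              [0.5, 4.0, 2.5]])
values, indices = topk_last_dim(x, 2)
# values  -> [[3.0, 2.0], [4.0, 2.5]]
# indices -> [[0, 2],     [1, 2]]
```

This is why k must be no larger than the size of the last dimension of the input.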

2q5ifsrm2#

Hi, sorry, I mis-stated it above: the length is 103041 and the tensor is one-dimensional; errors.shape prints as (103041L,).

11dmarpk3#

When the network is built, the first dimension corresponds to the batch size, and fluid initializes the batch size to -1.
For example, with this code:
a = fluid.layers.data(name='a', shape=[3,100,100], dtype='float32')
print a.shape
the output is:
(-1L, 3L, 100L, 100L)
Is errors.shape (103041,) really what you printed? Following my example above, errors.shape[0] should be -1.

w1jd8yoj4#

Hi, I fixed the batch size and flattened all inputs to one dimension, which gives a length of 103041, because topk's k does not seem to accept -1. I printed the shape with errors.shape. When I stop fixing the batch size, the flattened dimension is -1, and with k set to 1 I instead get the error "Expected input_dims[input_dims.size() - 1] >= k".

mutmk8jj5#

Which op did you use to flatten, reshape or flatten?

ktca8awb6#

I used reshape.

bttbmeg07#

Could you paste the code? I'll run it and see where the problem is.

dy1byipe8#

import numpy as np
import paddle.fluid as fluid

num_classes = 9

def lovasz_softmax(logit, labels):
    """
    Multi-class lovasz_softmax.
    :param logit: [batch, class_nums, width, height], here 1,9,321,321
    :param labels: [batch, width, height], here 1,321,321
    :return: loss
    """
    labels = fluid.layers.elementwise_min(
        labels,
        fluid.layers.assign(np.array(
            [num_classes - 1], dtype=np.int32)))
    logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
    logit = fluid.layers.reshape(logit, [-1, num_classes])
    logit = fluid.layers.softmax(logit)
    labels = fluid.layers.reshape(labels, [-1])
    labels = fluid.layers.cast(labels, 'int64')
    C = 9
    losses = []
    for c in range(C):
        # binary ground-truth vector for class c
        fg = (labels == c).astype('float32')
        # predicted scores for class c
        crop_logit = fluid.layers.crop(logit, offsets=[0, c], shape=[103041, 1])
        crop_logit.stop_gradient = True
        crop_logit = fluid.layers.reshape(crop_logit, [-1])
        # absolute difference between labels and predictions
        errors = fluid.layers.abs(fg - crop_logit)
        # print errors.shape
        # sort errors in descending order
        errors_sorted, perm = fluid.layers.topk(errors, k=errors.shape[0])
        # reorder fg by the sorted indices
        fg_sorted = fluid.layers.gather(fg, perm)
        # compute the gradient of the Lovasz extension
        gts = fluid.layers.reduce_sum(fg_sorted)
        cumsum = fluid.layers.cumsum(fg_sorted.astype("float32"))
        gts = fluid.layers.zeros(cumsum.shape, dtype='float32') + gts
        intersection = gts - cumsum
        union = gts + fluid.layers.cumsum((1 - fg_sorted).astype("float32"))
        jaccard = 1. - intersection / union
        jaccard_0 = fluid.layers.crop(jaccard, offsets=[0], shape=[1])
        jaccard_1 = fluid.layers.crop(jaccard, offsets=[1], shape=[103041])
        jaccard_2 = fluid.layers.crop(jaccard, offsets=[0], shape=[103040])
        jaccard = fluid.layers.concat([jaccard_0, jaccard_1 - jaccard_2], 0)
        losses.append(fluid.layers.elementwise_mul(errors_sorted, jaccard))
    losses = fluid.layers.stack(losses)
    return losses

Hi, what I'm trying to implement is the lovasz_softmax (multi-class) loss used in segmentation; the reference is https://github.com/bermanmaxim/LovaszSoftmax/blob/master/tensorflow/lovasz_losses_tf.py
The function is used inside fluid.program_guard. I've only just started with this framework; could you help debug it? Thanks.
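For readers following along, the cumsum block inside the loop computes the gradient of the Lovász extension of the Jaccard loss. Here is a minimal NumPy sketch of that step (an illustration of the math the fluid code mirrors, not the fluid code itself; `lovasz_grad` is a name used only for this sketch):

```python
import numpy as np

def lovasz_grad(fg_sorted):
    """Gradient of the Lovasz extension of the Jaccard loss,
    given the binary ground truth sorted by descending error."""
    gts = fg_sorted.sum()
    intersection = gts - np.cumsum(fg_sorted)
    union = gts + np.cumsum(1.0 - fg_sorted)
    jaccard = 1.0 - intersection / union
    # keep the first element; the rest become successive differences
    if len(fg_sorted) > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

fg_sorted = np.array([1.0, 0.0, 1.0, 0.0])
grad = lovasz_grad(fg_sorted)
# grad -> [0.5, 1/6, 1/3, 0.0]; its entries sum to 1.0
```

The jaccard_0/jaccard_1/jaccard_2 crops in the fluid code implement the same "successive differences" step.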

polhcujo9#

Please paste the rest of the code too; the data can be replaced with random numbers.

vql8enpb10#


# -*- coding: utf-8 -*-

import paddle.fluid as fluid

def lovasz_softmax(logit, labels):
    """
    Multi-class lovasz_softmax.
    :param logit: [batch, class_nums, width, height]
    :param labels: [batch, width, height]
    :return: loss
    """
    logit = fluid.layers.transpose(logit, [0, 2, 3, 1])
    logit = fluid.layers.reshape(logit, [-1, 9])
    logit = fluid.layers.softmax(logit)
    labels = fluid.layers.reshape(labels, [-1])
    labels = fluid.layers.cast(labels, 'int64')
    C = 9
    losses = []
    for c in range(C):
        # binary ground-truth vector for class c
        fg = (labels == c).astype('float32')
        # predicted scores for class c
        crop_logit = fluid.layers.crop(logit, offsets=[0, c], shape=[103041, 1])
        crop_logit.stop_gradient = True
        crop_logit = fluid.layers.reshape(crop_logit, [-1])
        # absolute difference between labels and predictions
        errors = fluid.layers.abs(fg - crop_logit)
        # sort errors in descending order
        errors_sorted, perm = fluid.layers.topk(errors, k=errors.shape[0])
        # reorder fg by the sorted indices
        fg_sorted = fluid.layers.gather(fg, perm)
        # compute the gradient of the Lovasz extension
        gts = fluid.layers.reduce_sum(fg_sorted)
        cumsum = fluid.layers.cumsum(fg_sorted.astype("float32"))
        gts = fluid.layers.zeros(cumsum.shape, dtype='float32') + gts
        intersection = gts - cumsum
        union = gts + fluid.layers.cumsum((1 - fg_sorted).astype("float32"))
        jaccard = 1. - intersection / union
        jaccard_0 = fluid.layers.crop(jaccard, offsets=[0], shape=[1])
        jaccard_1 = fluid.layers.crop(jaccard, offsets=[1], shape=[103041])
        jaccard_2 = fluid.layers.crop(jaccard, offsets=[0], shape=[103040])
        jaccard = fluid.layers.concat([jaccard_0, jaccard_1 - jaccard_2], 0)
        losses.append(fluid.layers.elementwise_mul(errors_sorted, jaccard))
    losses = fluid.layers.stack(losses)
    return losses

sp = fluid.Program()
tp = fluid.Program()

with fluid.program_guard(tp, sp):
    logit = fluid.layers.uniform_random(shape=[1,9,321,321],min=0.0,max=1.0)
    logit = fluid.layers.cast(logit,dtype='float32')
    label = fluid.layers.uniform_random(shape=[1,321,321],min=0.0,max=1.0)
    lovasz_loss = lovasz_softmax(logit,label)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(sp)

exe.run(tp)

That's the complete code. Could you take a look at what the problem is?

ajsxfq5m11#

I tried it, and on Paddle fluid 1.2 it does indeed raise this error. The cause is at [/paddle/paddle/fluid/framework/ddim.cc:235]:
PADDLE_ENFORCE(begin < end, "Begin index must be less than end index in ddim slice.");
Slicing a ddim requires end > begin, but here begin and end are both 0.

This has been fixed in fluid 1.3, so I recommend upgrading to 1.3.
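For anyone stuck on 1.2: since the question notes that fluid.layers.argsort does not hit this check, a full descending sort (k equal to the length) can be emulated from an ascending argsort. A NumPy sketch of the idea (the `full_descending_sort` name is only for illustration):

```python
import numpy as np

def full_descending_sort(x):
    """Emulate topk with k == len(x): sort ascending with argsort,
    then reverse to get descending values and their indices."""
    perm = np.argsort(x)[::-1]  # indices of descending order
    return x[perm], perm

errors = np.array([0.2, 0.9, 0.5])
errors_sorted, perm = full_descending_sort(errors)
# errors_sorted -> [0.9, 0.5, 0.2], perm -> [1, 2, 0]
```

In fluid terms this would mean sorting -errors ascending with argsort and negating the sorted values back, but verify the exact argsort return signature on your version.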

gab6jxml13#

Hi, it looks like 1.3 isn't on the official site yet. How can I install 1.3?
