Paddle re-implementation of stylegan2-ada: same outputs as pytorch but different gradients.

cpjpxq1n · posted on 2023-02-04 · category: Other

Code repository: https://github.com/miemie2013/stylegan2-ada-pytorch_m2

In stylegan2-ada, the gradient of the SynthesisNetwork output is taken when computing loss_Gpl. I re-implemented SynthesisNetwork in Paddle; it produces the same output as the pytorch version but a different gradient.

Environment: Python 3.9, paddlepaddle-gpu==2.2.0

First, run test2_SynthesisNetwork_grad.py. It saves the weights of the pytorch SynthesisNetwork as "pytorch_synthesis.pth", runs one forward pass, and saves the input, the output, the gradient of the output with respect to the input, and the gradients of some intermediate layer outputs with respect to the input to the file synthesis_grad.npz.
Next, run test2_SynthesisNetwork_grad_2paddle.py. It builds the paddle SynthesisNetwork, ports the weights from "pytorch_synthesis.pth" to paddle, and saves them as "pytorch_synthesis.pdparams".
Finally, run test2_SynthesisNetwork_grad_paddle.py. It builds the paddle SynthesisNetwork, loads the weights from "pytorch_synthesis.pdparams", reads the input saved in synthesis_grad.npz, and runs one forward pass. At this point the paddle and pytorch SynthesisNetwork have identical weights and identical inputs, so we expect identical outputs and identical gradients. The outputs do match, but the gradients do not. With batch size 2, dimg_dws_paddle[0][15] and dimg_dws_paddle[1][15] match the pytorch gradients, but every other position of dimg_dws_paddle is all zeros; in other words, only the last ws (ws[:, -1, :]) receives a gradient. (A rough sketch of this dump-and-compare workflow is shown below.)
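For readers unfamiliar with the repo, the dump-and-compare pattern looks roughly like the toy below. This is my own illustration, not the repo's scripts; a trivial Linear layer stands in for SynthesisNetwork, and the weight-porting line only illustrates the layout difference between the two frameworks.

import numpy as np
import paddle
import torch

# --- pytorch side: run once, dump input / output / d(output)/d(ws) ---
net_t = torch.nn.Linear(512, 512, bias=False)          # stand-in for SynthesisNetwork
ws_t = torch.randn(2, 16, 512, requires_grad=True)     # assumed [batch, num_ws, w_dim]
img_t = net_t(ws_t) ** 2
dimg_dws_t = torch.autograd.grad(outputs=[img_t.sum()], inputs=[ws_t])[0]
np.savez('synthesis_grad.npz',
         ws=ws_t.detach().numpy(), weight=net_t.weight.detach().numpy(),
         img=img_t.detach().numpy(), dimg_dws=dimg_dws_t.numpy())

# --- paddle side: same weights, same input, compare output and gradient ---
dic = np.load('synthesis_grad.npz')
net_p = paddle.nn.Linear(512, 512, bias_attr=False)
net_p.weight.set_value(dic['weight'].T)                 # pytorch stores [out, in], paddle [in, out]
ws_p = paddle.to_tensor(dic['ws'], stop_gradient=False)
img_p = net_p(ws_p) ** 2
dimg_dws_p = paddle.grad(outputs=[img_p.sum()], inputs=[ws_p], create_graph=True)[0]
print('output diff: %.6f' % np.sum((dic['img'] - img_p.numpy()) ** 2))
print('grad diff  : %.6f' % np.sum((dic['dimg_dws'] - dimg_dws_p.numpy()) ** 2))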

Notes: self.use_noise is set to False in SynthesisLayer, so no noise is injected and random numbers cannot interfere. Some ops in the paddle SynthesisNetwork have no second-order gradient, but they can be replaced with equivalent ops that do: for example, x = x[:, :, ::downy, ::downx] (strided_slice) can be implemented as a grouped convolution with a fixed kernel (see the sketch below), and addmm() can be implemented with matmul() and +. These equivalent implementations are not the cause of the gradient mismatch: switching back to the original strided_slice and addmm() and setting create_graph=False in paddle.grad() gives the same result.
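A minimal sketch (my own illustration, not code from the repo) of the strided_slice-as-grouped-convolution equivalence mentioned above: a per-channel 1x1 kernel of ones with stride (downy, downx) selects exactly the elements x[:, :, ::downy, ::downx], and conv2d does have a second-order gradient in paddle.

import paddle
import paddle.nn.functional as F

def strided_slice_as_conv(x, downy, downx):
    # groups == channels keeps channels independent; the 1x1 kernel of ones just copies values.
    C = x.shape[1]
    kernel = paddle.ones([C, 1, 1, 1], dtype=x.dtype)
    return F.conv2d(x, kernel, stride=[downy, downx], groups=C)

x = paddle.randn([2, 3, 8, 8])
print(float(paddle.abs(x[:, :, ::2, ::2] - strided_slice_as_conv(x, 2, 2)).max()))  # expect 0.0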

One more observation: if I comment out the type-cast line ws = paddle.cast(ws, dtype='float32') in forward(self, ws, **block_kwargs) of StyleGANv2ADA_SynthesisNetwork, then along the 16 slices of dimg_dws_paddle (dimension 1) the gradients alternate between all-zero and non-zero: all-zero, non-zero, all-zero, non-zero, ... This is really puzzling; I hope the official team can help.
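A quick way to see this pattern (illustrative helper, assuming dimg_dws_paddle is the [batch, num_ws, w_dim] gradient array obtained as in the sketch above):

import numpy as np

def report_ws_grad_slices(dimg_dws_paddle):
    # Print the largest absolute gradient per ws index; all-zero slices show up as 0.
    for i in range(dimg_dws_paddle.shape[1]):
        print('ws index %d: max |grad| = %.6g' % (i, np.abs(dimg_dws_paddle[:, i, :]).max()))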


ghg1uchk 1#

Hi! We've received your issue; please be patient while we arrange for technicians to answer your question as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check the API docs, FAQ, past GitHub issues, and the AI community for an answer. Have a nice day!


ut6juiuv 3#

This is a backward-alignment (gradient alignment) problem. We suggest bisecting to find the diff: locate the specific PaddlePaddle API or operation whose gradient does not match PyTorch, then verify further and report back. For example, compare the gradients checkpoint by checkpoint through the network, as sketched below.
Related documentation: https://github.com/PaddlePaddle/models/blob/release/2.2/docs/lwfx/ArticleReproduction_CV.md#3.8
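One way to do this bisection (my own sketch, not code from the linked guide): compute d(output)/d(checkpoint) for several intermediate tensors in the paddle network and compare each against the corresponding array saved from pytorch; the first checkpoint where the squared difference jumps points at the misaligned op.

import numpy as np
import paddle

def grad_diffs(y, checkpoints, reference):
    # checkpoints: dict name -> paddle tensor that participates in computing y
    # reference:   dict name -> numpy gradient saved from the pytorch run
    diffs = {}
    for name, t in checkpoints.items():
        g = paddle.grad(outputs=[y.sum()], inputs=[t], retain_graph=True)[0]
        diffs[name] = float(np.sum((reference[name] - g.numpy()) ** 2))
    return diffs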


xxls0lw8 4#


Following the suggestion above, I have localized the difference to a finer granularity: the gradient mismatch appears inside SynthesisLayer.

Code repository: https://github.com/miemie2013/stylegan2-ada-pytorch_m2

First, run test2_03_SynthesisLayer_grad.py. It saves the weights of the pytorch SynthesisLayer as "pytorch_synthesisLayer.pth", runs one forward pass, and saves the input, the output, the gradient of the output with respect to the input, and the gradients of some intermediate outputs with respect to the input to the file synthesisLayer_grad.npz.
Next, run test2_03_SynthesisLayer_grad_2paddle.py. It builds the paddle SynthesisLayer, ports the weights from "pytorch_synthesisLayer.pth" to paddle, and saves them as "pytorch_synthesisLayer.pdparams".
Finally, run test2_03_SynthesisLayer_grad_paddle.py. It builds the paddle SynthesisLayer, loads the weights from "pytorch_synthesisLayer.pdparams", reads the input saved in synthesisLayer_grad.npz, and runs one forward pass. At this point the paddle and pytorch SynthesisLayer have the same weights and the same input, so we expect the same outputs and the same gradients.

The gradients that differ are shown below:

# File: paddle_networks2.py

class SynthesisLayer(nn.Layer):
    ...
    def forward(self, x, w, dic2, pre_name, noise_mode='random', fused_modconv=True, gain=1):
        assert noise_mode in ['random', 'const', 'none']
        in_resolution = self.resolution // self.up
        styles = self.affine(w)

        # Check d(styles)/d(w) against the value saved from the pytorch run (dic2).
        dstyles_dw = paddle.grad(outputs=[styles.sum()], inputs=[w], create_graph=True)[0]
        dstyles_dw_paddle = dstyles_dw.numpy()
        dstyles_dw_pytorch = dic2[pre_name + '.dstyles_dw']
        ddd = np.sum((dstyles_dw_pytorch - dstyles_dw_paddle) ** 2)
        print('dstyles_dw_diff=%.6f' % ddd)

        noise = None
        if self.use_noise and noise_mode == 'random':
            noise = paddle.randn([x.shape[0], 1, self.resolution, self.resolution]) * self.noise_strength
        if self.use_noise and noise_mode == 'const':
            noise = self.noise_const * self.noise_strength

        flip_weight = (self.up == 1)  # slightly faster
        img2 = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up,
                                padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv)

        # Check d(img2)/d(x).
        dimg2_dx = paddle.grad(outputs=[img2.sum()], inputs=[x], create_graph=True)[0]
        dimg2_dx_paddle = dimg2_dx.numpy()
        dimg2_dx_pytorch = dic2[pre_name + '.dimg2_dx']
        ddd = np.sum((dimg2_dx_pytorch - dimg2_dx_paddle) ** 2)
        print('dimg2_dx_diff=%.6f' % ddd)

        # Check d(img2)/d(w) -- this is the gradient that does not match.
        dimg2_dw = paddle.grad(outputs=[img2.sum()], inputs=[w], create_graph=True)[0]
        dimg2_dw_paddle = dimg2_dw.numpy()
        dimg2_dw_pytorch = dic2[pre_name + '.dimg2_dw']
        ddd = np.sum((dimg2_dw_pytorch - dimg2_dw_paddle) ** 2)
        print('dimg2_dw_diff=%.6f' % ddd)

        # Check d(img2)/d(styles).
        dimg2_dstyles = paddle.grad(outputs=[img2.sum()], inputs=[styles], create_graph=True)[0]
        dimg2_dstyles_paddle = dimg2_dstyles.numpy()
        dimg2_dstyles_pytorch = dic2[pre_name + '.dimg2_dstyles']
        ddd = np.sum((dimg2_dstyles_pytorch - dimg2_dstyles_paddle) ** 2)
        print('dimg2_dstyles_diff=%.6f' % ddd)

        act_gain = self.act_gain * gain
        act_clamp = self.conv_clamp * gain if self.conv_clamp is not None else None
        img3 = bias_act(img2, paddle.cast(self.bias, dtype=img2.dtype), act=self.activation, gain=act_gain, clamp=act_clamp)
        return img3

dstyles_dw_pytorch and dstyles_dw_paddle are identical;
dimg2_dx_pytorch and dimg2_dx_paddle are identical;
dimg2_dw_pytorch and dimg2_dw_paddle are DIFFERENT!!!
dimg2_dstyles_pytorch and dimg2_dstyles_paddle are identical.

The core forward code is these two statements:

styles = self.affine(w)
...
img2 = modulated_conv2d(x=x, weight=self.weight, styles=styles, noise=noise, up=self.up,
                        padding=self.padding, resample_filter=self.resample_filter, flip_weight=flip_weight, fused_modconv=fused_modconv)

That is, the derivative of img2 with respect to styles matches pytorch, and the derivative of styles with respect to w matches pytorch, but the derivative of img2 with respect to w does NOT!!! At this point I really don't know what else to try; I hope the official team can help. (A self-contained toy with the same chaining is sketched below for reference.)
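For reference, the w -> styles -> output chaining can be exercised with the self-contained toy below (my own illustration; it does not reproduce the bug, which seems tied to the real modulated_conv2d path, but it shows the three gradients being compared and a hand-computed check of the chained one).

import paddle

affine = paddle.nn.Linear(8, 4)          # stand-in for self.affine
w = paddle.randn([2, 8])
w.stop_gradient = False
x = paddle.randn([2, 4])
x.stop_gradient = False

styles = affine(w)
img2 = x * styles                        # stand-in for modulated_conv2d

dimg2_dstyles = paddle.grad(outputs=[img2.sum()], inputs=[styles], create_graph=True)[0]
dstyles_dw    = paddle.grad(outputs=[styles.sum()], inputs=[w], create_graph=True)[0]
dimg2_dw      = paddle.grad(outputs=[img2.sum()], inputs=[w], create_graph=True)[0]

# For this toy graph the chain rule gives d(img2.sum())/dw = x @ weight.T
# (paddle.nn.Linear stores weight as [in, out]), so the chained gradient can be checked directly.
expected = paddle.matmul(x, affine.weight, transpose_y=True)
print(float(paddle.abs(dimg2_dw - expected).max()))  # expect ~0.0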


db2dz4w8 5#

(Quoting post 4# in full.)

Hello. In test2_03_SynthesisLayer_grad_paddle.py, I changed
dy_dx = paddle.grad(outputs=[y.sum()], inputs=[x], create_graph=False, retain_graph=True)[0]
dy_dws = paddle.grad(outputs=[y.sum()], inputs=[ws], create_graph=False, retain_graph=True)[0]
dy_dx_paddle = dy_dx.numpy()
dy_dws_paddle = dy_dws.numpy()
to
y_sum = y.sum()
y_sum.backward(retain_graph=True)
dy_dx_paddle = x.gradient()
dy_dws_paddle = ws.gradient()
and with this change the dy_dws_paddle gradient does align, so you can use backward() for now. Thanks for the feedback; a fix for the paddle.grad issue will be scheduled.


x9ybnkn6 6#

(Quoting posts 4# and 5# in full.)

Getting only numpy arrays is not enough: the gradient is still needed as a tensor to compute the loss (see the loss_Gpl-style sketch below). I hope the paddle.grad() issue can be fixed as soon as possible.
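To illustrate why (a sketch following the structure of stylegan2-ada's path length regularization, loss_Gpl; names and shapes are simplified and this is not the repo's exact code): the gradient itself enters a penalty that is backpropagated again, so it must remain a differentiable tensor, which requires paddle.grad(..., create_graph=True) rather than a numpy array from .gradient().

import paddle

def path_length_penalty(gen_img, ws, pl_mean):
    # Random projection of the generated image, as in stylegan2-ada.
    pl_noise = paddle.randn(gen_img.shape) / (gen_img.shape[2] * gen_img.shape[3]) ** 0.5
    # d(sum(gen_img * noise))/d(ws) must stay in the graph (create_graph=True),
    # because the penalty below is differentiated again during the G update.
    pl_grads = paddle.grad(outputs=[(gen_img * pl_noise).sum()], inputs=[ws],
                           create_graph=True)[0]
    pl_lengths = pl_grads.square().sum(2).mean(1).sqrt()
    pl_penalty = (pl_lengths - pl_mean).square()
    return pl_penalty.mean()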
