Cannot reproduce Matlab-based results in Python

bvn4nwqk · posted on 2022-12-13 in Matlab

I have tried to implement the Matlab script from Lindner (2012) in Python. However, the final result D of my Python script differs from the one I can produce in an online Matlab environment (see the outputs below). I seeded both scripts (rand('twister', 1337) in Matlab, np.random.seed(1337) in Python) to make the random numbers predictable.
Up to the final step, the Gram-Schmidt algorithm, everything appears to work correctly (as far as I can tell, the variable values are the same). However, D differs. Can anyone spot my mistake?
Lindner, Sören, Julien Legault, and Dabo Guan. 2012. 'Disaggregating Input–Output Models with Incomplete Information'. Economic Systems Research 24 (4): 329–47.
The Matlab script is available as supplementary material at: https://www.tandfonline.com/doi/suppl/10.1080/09535314.2012.689954
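
Since the only randomness enters through this seeding, a quick sanity check is to print the first few draws on each side and compare them by hand before looking at D at all. A minimal sketch (the printed values are whatever your environments produce, not reference values):

import numpy as np

# Legacy NumPy seeding, as used in the script below
np.random.seed(1337)
print(np.random.rand(1, 5))

# Compare by hand with the Matlab side:
#   rand('twister', 1337);
#   rand(1, 5)
# If these leading draws already differ, the final D cannot match,
# even if every other step is translated correctly.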
Matlab output (first row and first column), the authoritative result:

Diverging Python output (first row and first column):

"""Implementation of Lindner (2012) in Python with NumPy and Pandas.

Lindner, Sören, Julien Legault, and Dabo Guan. 2012.
‘Disaggregating Input–Output Models with Incomplete Information’.
Economic Systems Research 24 (4): 329–47.
https://doi.org/10.1080/09535314.2012.689954.

The comments in this script contain the Matlab code given in the supplementary
material 'cesr_a_689954_sup_27358897.docx' of Lindner (2012).

Source (accessed 06.12.2022):
https://www.tandfonline.com/doi/suppl/10.1080/09535314.2012.689954

The script contains one source of randomness: a random vector is generated
in line 90 of the Matlab script, `base(p,:) = rand(1,Nv)`. For verification
purposes, `np.random.seed(1337)` (Python) and `rand('twister', 1337)`
(Matlab) were applied.

"""

import numpy as np
import pandas as pd

from tqdm import tqdm

if True:

    # Switch flag for verification
    # Matlab equivalent: `rand('twister', 1337)`
    # Source: https://stackoverflow.com/a/20202330/5696601

    np.random.seed(1337)

# %% Loading data

# load('IOT_China.mat'); %Loading China's IO table

flows = pd.read_csv(
    # Input–output table of China (2007), in billion RMB
    'io-table-cn-2007-flows.csv',
    header=None
    )

flows_idx = pd.read_csv(
    'io-table-cn-2007-flows-idx.csv'
    )

flows.columns = pd.MultiIndex.from_frame(flows_idx)
flows.index = pd.MultiIndex.from_frame(flows_idx.iloc[:12, :])

# f = IOT_national(:,end-1); %Vector of final demand
f = flows.loc[:, ('Final demand', 'FD')]

# id = IOT_national(:,end-2); %Vector of intermediate demand
id = flows.loc[:, ('Intermediate demand', 'ID')]

# x = IOT_national(:,end); %Vector of total outputs
x = f + id

# Z = IOT_national(:,1:end-3); %Exchange matrix
Z = flows.loc[
    # Rows
    :,
    # Cols
    (~flows.columns.get_level_values('Cat')
     .isin(['ID', 'FD', 'TO']))
    ]

del flows_idx

# temp = size(Z); %Size of IO table
temp = Z.shape

# N = temp(1)-1; %Number of common sectors
N = temp[0] - 1

# A = Z./repmat(transpose(x),N+1,1); %Aggregated technical coefficient matrix
A = np.divide(Z, x)

# x_common = x(1:end-1); %Vector of total outputs for common sectors
x_common = x[:-1]

# f_common = f(1:end-1); %Vector of final demand for common sectors
f_common = f[:-1]

# Note: The last sector of the table is disaggregated,
# i.e. the electricity sector

# x_elec = x(end); %Total output of the disaggregated sector
x_elec = x[-1]

# f_elec = f(end); %Final demand of the disaggregated sector
f_elec = f[-1]

# %% Newly formed sectors from the electricity sector
# n = 3; %Number of new sectors
# w = [0.241;0.648;0.111]; %New sector weights

w = pd.read_csv(
    'io-table-cn-2007-w.csv',
    header=None
    )

w = w.values.flatten()

w_idx = pd.read_csv(
    'io-table-cn-2007-w-idx.csv'
    )

n = len(w)

# N_tot = N + n; %Total number of sectors for the disaggregated IO table
N_tot = N + n

# x_new = w.*x_elec; %Vector of new total sector outputs
x_new = w*x_elec/1000

# xs = [x_common;x_new]; %Vector of disaggregated economy sector total outputs
xs = np.concatenate((x_common, x_new))

# f_new = w*f_elec; %Final demand of new sectors
f_new = w*f_elec

# %% Building the constraint matrix C

# Nv = n*N_tot + n; %Number of variables
Nv = n * N_tot + n

# Nc = N + n + 1; %Number of constraints
Nc = N + n + 1

# q = [transpose(A(N+1,:));w]; %Vector of constraint constants
q = pd.concat(
    [A.iloc[N, :],
     pd.Series(w, index=pd.MultiIndex.from_frame(w_idx))]
    )

# C = zeros(Nc,Nv); %Matrix of constraints
C = np.zeros((Nc, Nv))

# %% Common sectors constraints

# C11 = zeros(N,N*n);
# for ii = 1:N
#     col_indices = n*(ii-1)+1:n*ii;
#     C11(ii,col_indices) = ones(1,n);
# end
# C(1:N,1:N*n) = C11;

C11 = np.zeros((N, N*n))

for ii in range(N):
    col_indices = range(n*(ii), n*ii+n)
    C11[ii, col_indices] = np.ones((1, n))

C[:N, :N*n] = C11

# %% New sectors constraints

# C22 = zeros(1,n^2);
# for ii = 1:n
#     col_indices = n*(ii-1)+1:n*ii;
#     C22(1,col_indices) = w(ii)*ones(1,n);
# end
# C(N+1,N*n+1:N*n+n^2) = C22;

C22 = np.zeros((1, n**2))

for ii in range(0, n):
    col_indices = range(n*(ii), n*ii+n)
    C22[0, col_indices] = w[ii]*np.ones((1, n))

C[N, N*n:N*n+n**2] = C22

# %% Final demand constraints

# C31 = zeros(n,N*n);
# for ii = 1:N
#     col_indices = n*(ii-1)+1:n*ii;
#     C31(1:n,col_indices) = (x_common(ii)/x_elec)*eye(n,n);
# end
# C32 = zeros(n,n^2);
# for ii = 1:n
#     col_indices = n*(ii-1)+1:n*ii;
#     C32(1:n,col_indices) = w(ii)*eye(n,n);
# end
# C(N+2:end,1:N*n) = C31;
# C(N+2:end,N*n+1:N*n+n^2) = C32;
# C(N+2:end,N*n+n^2+1:end) = eye(n,n);

C31 = np.zeros((n, N*n))

for ii in range(N):
    col_indices = range(n*(ii-1)+3, n*ii+3)
    C31[:n, col_indices] = (x_common[ii]/x_elec)*np.eye(n)

C32 = np.zeros((n, n**2))

for ii in range(0, n):
    col_indices = range(n*(ii-1)+3, n*ii+3)
    C32[:n, col_indices] = w[ii]*np.eye(n)

C[N+1:, :N*n] = C31
C[N+1:, N*n:N*n+n**2] = C32
C[N+1:, N*n+n**2:] = np.eye(n)

# %% Building the initial estimate y0

# Technical coefficient matrix of the initial estimate
# As_y0 = zeros(N_tot,N_tot);
# As_y0(1:N,1:N) = A(1:N,1:N); %Common/Common part
# As_y0(1:N,N+1:N_tot) = repmat(A(1:N,N+1),1,n); %Common/New part
# As_y0(N+1:N_tot,1:N) = w*A(N+1,1:N); %New/Common part
# As_y0(N+1:N_tot,N+1:N_tot) = A(N+1,N+1)*repmat(w,1,n); %New/New part

As_y0 = np.zeros((N_tot, N_tot))

As_y0[:N, :N] = A.iloc[:N, :N]

As_y0[:N, N:N_tot] = np.repeat(A.iloc[:N, N].to_numpy(), n).reshape(N, n)

As_y0[N:N_tot, :N] = (
    np.multiply(w, A.iloc[N, :N].to_numpy().repeat(n).reshape(N, n)).T
    )

As_y0[N:N_tot, N:N_tot] = np.multiply(
    A.iloc[N, N],
    np.repeat(w, n).reshape(n, n)
    )

# %% Generating the orthogonal distinguishing matrix

# %%% Making the constraint matrix orthogonal

# C_orth = C;
# for c = 1:Nc
#     for i = 1:c-1
#         C_orth(c,:) = C_orth(c,:) - dot(C_orth(c,:),C_orth(i,:))/norm(C_orth(i,:))^2*C_orth(i,:); %Orthogonal projection
#     end
# end

C_orth = C.copy()

for c in tqdm(range(Nc), desc='Orthogonalize constraint matrix'):
    for i in range(c):
        C_orth[c, :] = (
            C_orth[c, :]
            - np.dot(C_orth[c, :], C_orth[i, :])
            / np.linalg.norm(C_orth[i, :])**2 * C_orth[i, :]
            )

# %%% Gram-Schmidt algorithm

# base = zeros(Nv,Nv); %Orthogonal base containing C_orth and D
# base(1:Nc,:) = C_orth;
# for p = Nc+1:Nv
#     base(p,:) = rand(1,Nv); %Generate random vector
#     for i=1:p-1
#         base(p,:) = base(p,:) - dot(base(p,:),base(i,:))/norm(base(i,:))^2*base(i,:); %Orthogonal projection on previous vectors
#     end
#     base(p,:) = base(p,:)/norm(base(p,:)); %Normalizing
# end
# D = transpose(base(Nc+1:end,:)); %Retrieving the distinguishing matrix from the orthogonal base

base = np.zeros((Nv, Nv))
base[:Nc, :] = C_orth.copy()

for p in tqdm(range(Nc, Nv), desc='Gram-Schmidt algorithm'):

    base[p, :] = np.random.rand(1, Nv)

    for i in range(p-1):

        base[p, :] = (
            base[p, :]
            - np.dot(base[p, :], base[i, :])
            / np.linalg.norm(base[i, :])**2 * base[i, :]
            )

    base[p, :] = base[p, :] / np.linalg.norm(base[p, :])

D = base[Nc:, :].T

io-table-cn-2007-flows.csv

687.7,7,0.8,2223.1,0,167.6,0.7,66.4,0,25.9,255,0,3434.2,1420.5,4854.7
2.7,97,5.7,37.1,112,193.5,122.7,22.7,7.1,5.7,25.5,330.2,961.9,41.4,1003.3
0.6,1.3,114.8,11,1189.4,442.2,933.4,29.3,55.7,83.5,17.5,36.8,2915.5,62.3,2977.8
482.2,15.7,25,3813.9,15.8,326.7,98.6,370.1,3.3,171.3,1368.1,27.5,6718.2,4675.6,11393.8
39.4,13.6,89.2,46.2,121.4,463,298.4,83.7,3.4,126.7,771.3,127.5,2183.8,145.5,2329.3
379.8,27.1,122.8,885.2,48,3176.6,250.9,1098.6,7.4,1579,758.9,15.5,8349.8,1189.9,9539.7
14.6,69.3,86.6,136.6,10.3,228.8,2972.3,2684.5,4.7,1208.8,109.4,17.3,7543.2,1085.9,8629.1
58.6,98,197.2,307.8,50.1,339.4,683.5,6359,8.4,531.9,1331.4,295,10260.3,8754.1,19014.4
1.1,1.7,9.2,17.6,4.9,29.8,17.8,17.7,9.5,3,40.1,9.3,161.7,64.9,226.6
1.1,1.3,1.4,2.6,1.2,2.7,2.1,3.5,0.2,59.8,123.1,1,200,6018.7,6218.7
309.7,129.5,189,917.1,130.9,787.8,570.3,1366.1,27.1,942.5,3873.2,278.2,9521.4,10119.7,19641.1
45.8,60.2,174.7,171,48.3,436.4,367.9,214.1,25,82.7,276.1,1129.4,3031.6,241.8,3273.4

io-table-cn-2007-flows-idx.csv

Category,Cat
Agriculture,Ag
Coal mining and processing,CmP
Petroleum processing and natural gas products,Pp
Food manufacturing and tobacco products,Fm
Petroleum processing and coking,Ppc
Chemicals,Ch
Metal smelting and pressing,Msp
Machinery and equipment,M+e
Gas production and distribution,Gp+d
Construction,Co
Transport and warehousing,T+w
Electricity production and distribution,Ep+d
Intermediate demand,ID
Final demand,FD
Total output,TO

io-table-cn-2007-w.csv

0.241
0.648
0.111

io-table-cn-2007-w-idx.csv

Category,Cat
Hydro-electricity and others,Hy
Subcritical coal,SubC
Other fossil fuels,OFF
zxlwwiss · answer #1

There are a few minor issues with the Gram-Schmidt algorithm above. Note that I only checked the part you mentioned:
"Up to the final step, the Gram-Schmidt algorithm, everything appears to work correctly (as far as I can tell, the variable values are the same)."
First, in the outer for loop you run from Nc -> Nv, which means the random vector in the p-th row of your base does not get orthogonalised; the Matlab script runs over Nc+1:Nv.
Second, in the inner for loop you can run from 0 to p, because the projection of the p-th vector onto the i-th vector is the same whether i is counted from 0 to p-1 or from 1 to p-1.
In addition, I shortened the code with some syntactic sugar (-=, /=), but apart from that your Gram-Schmidt implementation is exactly the one presented in the Lindner (2012) paper.

# Orth. base containing both C_orth and D
base = np.zeros((Nv, Nv))

# C_orth is placed in the first :Nc rows of the base from above (c.f. Matlab code)
base[:Nc, :] = C_orth.copy()

# Generate random vectors for remaining rows
for p in range(Nc+1, Nv):
    # Random vector
    base[p, :] = np.random.rand(1, Nv)

    # Orthogonal projection on previous vectors
    for i in range(p):
        # Subtract the projection of the pth vector on the ith vector
        # from the pth vector - as described in the Paper by:
        # base(p,:) = base(p,:) 
        # - dot(base(p,:),base(i,:))/norm(base(i,:))^2*base(i,:);
        # Besides the syntax, it's the exact replication! 
        base[p, :] -= np.dot(base[p, :], base[i, :]) / np.linalg.norm(base[i, :])**2 * base[i, :]

    # Normalize vector
    base[p, :] /= np.linalg.norm(base[p, :])

# Retrieve matrix from the orthogonal base
D = base[Nc:, :].T

One more thing about why your results may still differ: you might be using a different random number generator than the one used in the Matlab file, in which case you generate different random vectors!
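
Because D depends on the particular random draws, comparing it entry by entry across environments is fragile even with matching seeds. A more robust check is to verify the properties any valid distinguishing matrix should satisfy, which do not depend on the RNG. A minimal sketch, assuming C and D are the NumPy arrays built above (the tolerance is a guess and may need loosening):

import numpy as np

# D is only determined up to the random vectors drawn in the
# Gram-Schmidt step, but any valid distinguishing matrix should
# (a) contain no NaNs, (b) be orthogonal to the constraint matrix C,
# and (c) have orthonormal columns.
assert not np.isnan(D).any()
assert np.allclose(C @ D, np.zeros((Nc, Nv - Nc)), atol=1e-8)
assert np.allclose(D.T @ D, np.eye(Nv - Nc), atol=1e-8)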
