numpy.add.at slower than in-place add?

Asked by dl5txlt9 on 2023-05-07

Following up on one of my earlier posts, I noticed that the operation np.add.at(A, indices, B) is much slower than A[indices] += B.

import numpy as np

def fast(A):
    n = A.shape[0]
    retval = np.zeros(2*n-1)
    for i in range(n):
        retval[slice(i,i+n)] += A[i, :]
    return retval
def slow(A):
    n = A.shape[0]
    retval = np.zeros(2*n-1)
    for i in range(n):
        np.add.at(retval, slice(i,i+n), A[i, :])
    return retval
def slower(A):
    n = A.shape[0]
    retval = np.zeros(2*n-1)
    indices = np.arange(n)
    indices = indices[:,None] + indices
    np.add.at(retval, indices, A) # bottleneck here
    return retval

My timings:

from timeit import timeit

A = np.random.randn(10000, 10000)

timeit(lambda: fast(A), number=10) # 0.8852798199995959
timeit(lambda: slow(A), number=10) # 56.633683917999406
timeit(lambda: slower(A), number=10) # 57.763389584000834

Clearly, using __iadd__ is much faster. However, the documentation for np.add.at states:

Performs unbuffered in place operation on operand 'a' for elements specified by 'indices'. For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once.

Why is np.add.at so slow?
What is the use case for this function?

Answer 1 (daolsyd0):

add.at is intended for cases where the indices contain duplicates and += does not produce the desired result:

In [44]: A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
In [45]: A[idx]+=1
In [46]: A
Out[46]: array([1, 1, 1, 1, 0])    # the duplicates in idx are ignored

With add.at:

In [47]: A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
In [48]: np.add.at(A, idx, 1)
In [49]: A
Out[49]: array([1, 2, 3, 4, 0])

It is the same result as explicit iteration:

In [50]: A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
In [51]: for i in idx: A[i]+=1
In [52]: A
Out[52]: array([1, 2, 3, 4, 0])

Some timings:

In [53]: %%timeit A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
    ...: A[idx]+=1
3.65 µs ± 13.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [54]: %%timeit A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
    ...: np.add.at(A, idx, 1)
6.47 µs ± 24.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [55]: %%timeit A = np.zeros(5,int); idx = np.array([0,1,1,2,2,2,3,3,3,3])
    ...: np.add.at(A, idx, 1)
    ...: for i in idx: A[i]+=1
15.6 µs ± 41.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

add.at+=慢,但比python迭代好。
我们可以测试A[:4]+1A[:4]+=1等变体。
无论如何,如果你不需要,不要使用add.at变体。
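For reference, here is a minimal sketch (my addition, not part of the answer) of how those two variants could be timed with the timeit module; the absolute numbers will of course depend on the machine:

import numpy as np
from timeit import timeit

A = np.zeros(5, int)

# A[:4] + 1 allocates and returns a new array on every call.
t_copy = timeit(lambda: A[:4] + 1, number=100_000)

# A[:4] += 1 writes back into A's existing buffer (the __iadd__ path).
def inplace():
    A[:4] += 1

t_inplace = timeit(inplace, number=100_000)
print(t_copy, t_inplace)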

Edit

Your example, simplified a bit:

In [108]: x = np.zeros(2*10-1)
     ...: for i in range(10):
     ...:     x[i:i+10] += 1
     ...: 
In [109]: x
Out[109]: 
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,  9.,  8.,  7.,
        6.,  5.,  4.,  3.,  2.,  1.])

So you are adding values to overlapping slices. We can replace the slice with an array:

In [110]: x = np.zeros(2*10-1)
     ...: for i in range(10):
     ...:     x[np.arange(i,i+10)] += 1
     ...: 
In [111]: x
Out[111]: 
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,  9.,  8.,  7.,
        6.,  5.,  4.,  3.,  2.,  1.])

There's no need to include your 'slow' case, add.at with a slice, since those indices have no duplicates.
Now create all the indices at once. += does not work here because of buffering:

In [112]: idx=np.arange(10); idx=(idx[:,None]+idx).ravel()
In [113]: y=np.zeros(2*10-1)
     ...: y[idx]+=1
In [114]: y
Out[114]: 
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
       1., 1.])

add.at takes care of that:

In [115]: y=np.zeros(2*10-1)
     ...: np.add.at(y, idx, 1)
In [116]: y
Out[116]: 
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,  9.,  8.,  7.,
        6.,  5.,  4.,  3.,  2.,  1.])

Full Python iteration:

In [117]: y=np.zeros(2*10-1)
     ...: for i in idx: y[i]+=1
In [118]: y
Out[118]: 
array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,  9.,  8.,  7.,
        6.,  5.,  4.,  3.,  2.,  1.])

Now some timings.
Baseline:

In [119]: %%timeit
     ...: x = np.zeros(2*10-1)
     ...: for i in range(10):
     ...:     x[i:i+10] += 1
     ...: 
50.5 µs ± 177 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Advanced indexing slows things down a bit:

In [120]: %%timeit
     ...: x = np.zeros(2*10-1)
     ...: for i in range(10):
     ...:     x[np.arange(i,i+10)] += 1
     ...: 
75.2 µs ± 79.9 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

If it gave the right result, a single advanced-indexing += would be the fastest:

In [121]: %%timeit
     ...: idx=np.arange(10); idx=(idx[:,None]+idx).ravel()
     ...: y=np.zeros(2*10-1)
     ...: y[idx]+=1
     ...: 
     ...: 
17.5 µs ± 693 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Full Python iteration is about the same as the looped-array case:

In [122]: %%timeit
     ...: idx=np.arange(10); idx=(idx[:,None]+idx).ravel()
     ...: y=np.zeros(2*10-1)
     ...: for i in idx: y[i]+=1
     ...: 
     ...: 
76.3 µs ± 2.51 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

add.at is slower than the flat +=, but still better than the baseline:

In [123]: %%timeit
     ...: idx=np.arange(10); idx=(idx[:,None]+idx).ravel()
     ...: y=np.zeros(2*10-1)
     ...: np.add.at(y, idx,1)
     ...: 
     ...: 
29.4 µs ± 21.2 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In my small test the 'slower' approach performs best. But it may not scale the way base/fast does; your indices are much larger. Often, for very large arrays, iterating on one dimension is faster because of the reduced memory-management load. (A rough harness for checking this at larger sizes is sketched below.)
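Such a scaling check (my addition; it assumes the fast, slow and slower functions from the question are already defined) could look like this:

from timeit import timeit
import numpy as np

# Rerun the question's functions at a few sizes to see whether the
# add.at versions keep falling behind as n grows.
for n in (500, 2000, 8000):
    A = np.random.randn(n, n)
    for f in (fast, slow, slower):
        t = timeit(lambda: f(A), number=3)
        print(f"n={n:5d}  {f.__name__:7s}  {t:.3f} s")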

Answer 2 (aij0ehis):

I also ran into np.add.at being slow. In the end I wrote my own version based on sorting:

import numpy as np

def add_at(A, indices, B):
    # Sort the indices so that equal values form contiguous runs.
    sorted_indices = np.argsort(indices)
    # Unique indices and the length of each run.
    uniques, run_lengths = np.unique(indices[sorted_indices], return_counts=True)
    for i, length, end in zip(uniques, run_lengths, run_lengths.cumsum()):
        # Sum all rows of B that map to index i, then add to A[i] in one step.
        A[i] += B[sorted_indices[end-length:end]].sum(axis=0)

For small arrays this is slower than np.add.at, but for large arrays it is 20x faster or more.
A small benchmark:

n, m, d = 5, 10, 3
A = np.zeros((n, d))
B = np.random.randn(m, d)
indices = np.random.randint(n, size=m)

%timeit np.add.at(A, indices, B)
7.6 µs ± 16 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
%timeit add_at(A, indices, B)
82.9 µs ± 2.2 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

A large benchmark:

n, m, d = 10**3, 10**6, 10**2
...
%timeit np.add.at(A, indices, B)
10.2 s ± 42.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit add_at(A, indices, B)
509 ms ± 50.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

There is also a common pattern that is simpler and faster than np.add.at, although it is still slower than the sort-based approach:

def add_at_ind(A, indices, B):
    # For each unique index, sum the matching rows of B via a boolean mask.
    for i in np.unique(indices):
        A[i] += B[indices == i].sum(axis=0)
# Small
%timeit add_at_ind(A, indices, B)
56.1 µs ± 1.28 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
# Large
%timeit add_at_ind(A, indices, B)
3.3 s ± 101 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
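As a quick sanity check (my addition, not part of the answer), both helpers should accumulate exactly like np.add.at when run on fresh copies of the same starting array:

import numpy as np

n, m, d = 5, 10, 3
A0 = np.zeros((n, d))
B = np.random.randn(m, d)
indices = np.random.randint(n, size=m)

ref, a1, a2 = A0.copy(), A0.copy(), A0.copy()
np.add.at(ref, indices, B)       # reference result
add_at(a1, indices, B)           # sort-based version
add_at_ind(a2, indices, B)       # unique/boolean-mask version
print(np.allclose(ref, a1), np.allclose(ref, a2))   # expected: True True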
