Out of memory with numpy and the Ray Python framework

s4n0splo · posted 2023-01-09 in Python

I created a simple remote function with Ray that uses very little memory. However, after running for a short period of time, memory steadily increases and I get a RayOutOfMemoryError exception.
The code below is a very simple example of the problem. The "result_transformed" numpy array is sent out to the workers, and each worker can operate on it. My simplified calc_similarity function does nothing, yet it still runs out of memory. I have added longer sleep times to the method to simulate doing more work, but it eventually runs out of memory anyway.
I am running on an 8-core Intel 9900K with 32 GB of RAM under Ubuntu 19.10. Python is the Intel Python Distribution 3.7.4, and numpy is 1.17.4 (with Intel MKL).

import numpy as np
from time import sleep
import ray
import psutil

@ray.remote
def calc_similarity(sims, offset):
    # Fake some work for 100 ms.
    sleep(0.10)
    return True

if __name__ == "__main__":
    # Initialize RAY to use all of the processors.
    num_cpus = psutil.cpu_count(logical=False)
    ray.init(num_cpus=num_cpus)

    num_docs = 1000000
    num_dimensions = 300
    chunk_size = 128
    sim_pct = 0.82

    # Initialize the array
    index = np.random.random((num_docs, num_dimensions)).astype(dtype=np.float32)
    index_array = np.arange(num_docs).reshape(1, num_docs)
    index_array_id = ray.put(index_array)

    calc_results = []

    for count, start_doc_no in enumerate(range(0, num_docs, chunk_size)):
        size = min(chunk_size, num_docs - start_doc_no)
        # Get the query vector out of the index.
        query_vector = index[start_doc_no:start_doc_no+size]
        # Calculate the matrix multiplication.
        result_transformed = np.matmul(index, query_vector.T).T
        # Serialize the result matrix out for each client.
        result_id = ray.put(result_transformed)

        # Simulate multi-threading extracting the results of a cosine similarity calculation
        for offset in range(chunk_size):
            calc_results.append(calc_similarity.remote(sims=result_id, offset=offset ))
            # , index_array=index_array_id))
        res = ray.get(calc_results)
        calc_results.clear()

Any help or guidance would be greatly appreciated.

flseospp #1

Thank you for your response.
The problem was that the GC was not running, because its default thresholds were never reached before I exhausted the memory on my 32 GB system.
Each call to ray.put(result_transformed) stores a fairly large array (128 x 1,000,000 in this case, or roughly 0.5 GB as float32).
To work around this, I created a method that takes a memory-usage percentage threshold and forces a garbage collection when it is exceeded:

import gc
import psutil

def auto_garbage_collect(pct=80.0):
    # Force a collection once system-wide memory usage crosses the threshold.
    if psutil.virtual_memory().percent >= pct:
        gc.collect()

Calling this function frequently in my core processing loop resolves the out-of-memory situation.
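A minimal self-contained sketch of wiring this check into the chunked loop (the helper is repeated here so the snippet runs on its own, and the loop body is a stand-in for the real workload, not the original code):

```python
import gc
import psutil

def auto_garbage_collect(pct=80.0):
    # Collect only when system memory usage is at or above the threshold.
    if psutil.virtual_memory().percent >= pct:
        gc.collect()

# Stand-in for the chunked processing loop from the question:
for start_doc_no in range(0, 1024, 128):
    # ... build result_transformed, ray.put() it, dispatch workers ...
    auto_garbage_collect()  # cheap no-op while memory stays below 80%
```

The check itself is inexpensive, so calling it once per chunk costs almost nothing when memory usage is below the threshold.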
The situation can also be addressed by changing the garbage collector's threshold settings:

gc.set_threshold()
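For reference, a quick stdlib sketch of inspecting and tightening these thresholds (the values below are illustrative, not a recommendation):

```python
import gc

# CPython's defaults are (700, 10, 10): a generation-0 collection
# runs once allocations minus deallocations exceed 700.
print(gc.get_threshold())

# Lowering the generation-0 threshold makes collections run more
# often, trading CPU time for earlier reclamation of garbage.
gc.set_threshold(100, 10, 10)
print(gc.get_threshold())  # (100, 10, 10)
```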

That approach is very task-dependent and sensitive to the size of the data objects in use, so I felt the first method was the better choice.
Thanks for the detailed response! Very helpful and instructive.

kq0g1dla #2

Ray currently has partial support for reference counting (full reference counting will be released soon). In short, when an object_id passed to a remote function is not serialized, it is reference-counted the same way Python reference-counts objects. This means that if result_transformed is garbage collected by Python, the result_transformed in the plasma store should be unpinned, and the object should then be evicted under LRU eviction (to be clear, pinned objects with a nonzero reference count are not evicted).
I also suspect there is some odd reference counting going on, for example a circular reference. I verified that result_transformed is evicted when running this script, so I guess result_transformed itself is not the problem; there could be many causes. In my case, I found that when I used ipython as the shell, it creates references to Python objects (In). (For example, when you view the value of some object, Out[number] can hold a reference to your object.)

In [2]: import psutil 
   ...: import gc 
   ...: import ray 
   ...: from time import sleep 
   ...: import numpy as np 
   ...: @ray.remote 
   ...: def calc_similarity(sims, offset): 
   ...:     # Fake some work for 100 ms. 
   ...:     sleep(0.10) 
   ...:     return True 
   ...:  
   ...: if __name__ == "__main__": 
   ...:     # Initialize RAY to use all of the processors. 
   ...:     num_cpus = psutil.cpu_count(logical=False) 
   ...:     ray.init(num_cpus=num_cpus) 
   ...:  
   ...:     num_docs = 1000000 
   ...:     num_dimensions = 300 
   ...:     chunk_size = 128 
   ...:     sim_pct = 0.82 
   ...:  
   ...:     # Initialize the array 
   ...:     index = np.random.random((num_docs, num_dimensions)).astype(dtype=np.float32) 
   ...:     index_array = np.arange(num_docs).reshape(1, num_docs) 
   ...:     index_array_id = ray.put(index_array) 
   ...:  
   ...:     calc_results = [] 
   ...:     i = 0 
   ...:     for count, start_doc_no in enumerate(range(0, num_docs, chunk_size)): 
   ...:         i += 1 
   ...:         size = min(chunk_size, num_docs - start_doc_no) 
   ...:         # Get the query vector out of the index. 
   ...:         query_vector = index[start_doc_no:start_doc_no+size] 
   ...:         # Calculate the matrix multiplication. 
   ...:         result_transformed = np.matmul(index, query_vector.T).T 
   ...:         # Serialize the result matrix out for each client. 
   ...:         result_id = ray.put(result_transformed) 
   ...:         if i == 1: 
   ...:             # The first result_id binary number should be stored in result_id_special 
   ...:             # In this way, we can verify if this object id is evicted after filling up our  
   ...:             # plasma store by some random numpy array 
   ...:             # If this object id is not evicted, that means it is pinned, meaning if is  
   ...:             # not properly reference counted. 
   ...:             first_object_id = result_id.binary() 
   ...:         # Simulate multi-threading extracting the results of a cosine similarity calculation 
   ...:         for offset in range(chunk_size): 
   ...:             calc_results.append(calc_similarity.remote(sims=result_id, offset=offset )) 
   ...:             # , index_array=index_array_id)) 
   ...:         res = ray.get(calc_results) 
   ...:         calc_results.clear() 
   ...:         print('ref count to result_id {}'.format(len(gc.get_referrers(result_id)))) 
   ...:         print('Total number of ref counts in a ray cluster. {}'.format(ray.worker.global_worker.core_worker.get_all_reference_counts())) 
   ...:         if i == 5: 
   ...:             break 
   ...:     # It should contain the object id of the first ray.put result. 
   ...:     print('first object id: {}'.format(first_object_id)) 
   ...:     print('fill up plasma store by big numpy arrays. This should evict the first_object_id from the plasma store.') 
   ...:     print('because if the data_transformed is garbage collected properly, it should be unpinned from plasma store') 
   ...:     print('and when plasma store is filled by numpy array, first_object_id should be evicted.') 
   ...:     for _ in range(40): 
   ...:         import numpy as np 
   ...:         ray.put(np.zeros(500 * 1024 * 1024, dtype=np.uint8)) 
   ...:     print('total ref count from a ray cluster after eviction: {}'.format(ray.worker.global_worker.core_worker.get_all_reference_counts())) 
   ...:     # this should fail as first_object_id is already evicted 
   ...:     print(ray.get(ray.ObjectID(first_object_id))) 

[ray] Forcing OMP_NUM_THREADS=1 to avoid performance degradation with many workers (issue #6998). You can override this by explicitly setting OMP_NUM_THREADS.
2020-02-12 00:10:11,932 INFO resource_spec.py:212 -- Starting Ray with 4.35 GiB memory available for workers and up to 2.19 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-02-12 00:10:12,273 INFO services.py:1080 -- View the Ray dashboard at localhost:8265
2020-02-12 00:10:18,522 WARNING worker.py:289 -- OMP_NUM_THREADS=1 is set, this may slow down ray.put() for large objects (issue #6998).
ref count to result_id 1
Total number of ref counts in a ray cluster. {ObjectID(ffffffffffffffffffffffff0100008002000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}}
ref count to result_id 1
Total number of ref counts in a ray cluster. {ObjectID(ffffffffffffffffffffffff0100008003000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}}
ref count to result_id 1
Total number of ref counts in a ray cluster. {ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008004000000): {'local': 1, 'submitted': 0}}
ref count to result_id 1
Total number of ref counts in a ray cluster. {ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008005000000): {'local': 1, 'submitted': 0}}
ref count to result_id 1
Total number of ref counts in a ray cluster. {ObjectID(ffffffffffffffffffffffff0100008006000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}}
first object id: b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x01\x00\x00\x80\x02\x00\x00\x00'
fill up plasma store by big numpy arrays. This should evict the first_object_id from the plasma store.
because if the data_transformed is garbage collected properly, it should be unpinned from plasma store
and when plasma store is filled by numpy array, first_object_id should be evicted.
total ref count from a ray cluster after eviction: {ObjectID(ffffffffffffffffffffffff0100008006000000): {'local': 1, 'submitted': 0}, ObjectID(ffffffffffffffffffffffff0100008001000000): {'local': 1, 'submitted': 0}}
2020-02-12 00:10:57,108 WARNING worker.py:1515 -- Local object store memory usage:
num clients with quota: 0
quota map size: 0
pinned quota map size: 0
allocated bytes: 2092865189
allocation limit: 2347285708
pinned bytes: 520000477
(global lru) capacity: 2347285708
(global lru) used: 67.0078%
(global lru) num objects: 4
(global lru) num evictions: 41
(global lru) bytes evicted: 21446665725

2020-02-12 00:10:57,112 WARNING worker.py:1072 -- The task with ID ffffffffffffffffffffffff0100 is a driver task and so the object created by ray.put could not be reconstructed.
---------------------------------------------------------------------------
UnreconstructableError                    Traceback (most recent call last)
<ipython-input-1-184e5836123c> in <module>
     63     print('total ref count from a ray cluster after eviction: {}'.format(ray.worker.global_worker.core_worker.get_all_reference_counts()))
     64     # this should fail as first_object_id is already evicted
---> 65     print(ray.get(ray.ObjectID(first_object_id)))
     66 

~/work/ray/python/ray/worker.py in get(object_ids, timeout)
   1517                     raise value.as_instanceof_cause()
   1518                 else:
-> 1519                     raise value
   1520 
   1521         # Run post processors.

UnreconstructableError: Object ffffffffffffffffffffffff0100008002000000 is lost (either LRU evicted or deleted by user) and cannot be reconstructed. Try increasing the object store memory available with ray.init(object_store_memory=<bytes>) or setting object store limits with ray.remote(object_store_memory=<bytes>). See also: https://ray.readthedocs.io/en/latest/memory-management.html
