python: How do I allocate memory in SLURM?

plupiseo, posted 2023-01-24 in Python

I'm new to Slurm. I want to run the Python file below, which needs 92.3 GiB. I allocated 120 GB, but my code still fails with a memory error.
submit_venv.sh

#!/bin/bash

#SBATCH --account=melchua
#SBATCH --mem=120GB
#SBATCH --time=2:00:00

module load python/3.8.2
python3 1.methylation_data_processing.py

I run the script with ./submit_venv.sh
Traceback:

File "1.methylation_data_processing.py", line 49, in <module>
    meth_clin = pd.concat([gene_symbol, meth_clin])  # add gene_symbol to dataframe
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 307, in concat
    return op.get_result()
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/reshape/concat.py", line 532, in get_result
    new_data = concatenate_managers(
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 222, in concatenate_managers
    values = _concatenate_join_units(join_units, concat_axis, copy=copy)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 486, in _concatenate_join_units
    to_concat = [
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 487, in <listcomp>
    ju.get_reindexed_values(empty_dtype=empty_dtype, upcasted_na=upcasted_na)
  File "/scg/apps/software/python/3.8.2/lib/python3.8/site-packages/pandas/core/internals/concat.py", line 441, in get_reindexed_values
    missing_arr = np.empty(self.shape, dtype=empty_dtype)
numpy.core._exceptions.MemoryError: Unable to allocate 92.3 GiB for an array with shape (111331, 111332) and data type object
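
For reference, the 92.3 GiB in the error is simply the pointer storage of an object-dtype array with that shape (8 bytes per element). A quick back-of-the-envelope check in any shell:

# 111331 rows x 111332 columns, 8 bytes per object pointer
bytes=$((111331 * 111332 * 8))
echo "$bytes bytes"                                             # 99157623136
awk -v b="$bytes" 'BEGIN { printf "%.1f GiB\n", b / 1024^3 }'   # 92.3 GiB

So before pandas even fills the result, NumPy has to reserve roughly 92 GiB in a single allocation, on top of whatever the input frames already occupy, which is why 120 GB can still be too little.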

0tdrvxhp (answer 1)

Assuming your slurm.conf correctly lists RAM as a consumable resource (e.g. SelectTypeParameters=CR_CPU_Memory), the problem is probably not Slurm-related; more likely the operating system refuses to allocate that much memory for a single task. See also: Unable to allocate array with shape and data type.
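
If you want to verify this on your cluster, the commands below are a minimal sketch (the exact fields reported depend on the site's configuration; <jobid> is a placeholder for your real job ID):

# Confirm that memory is tracked as a consumable resource
scontrol show config | grep -E "SelectType|MemPer"

# After the job has run, compare the requested memory with the peak usage
sacct -j <jobid> --format=JobID,ReqMem,MaxRSS,State

If MaxRSS stays well below the requested memory and the job still dies with this error, that points to the allocation failing inside the process rather than Slurm killing it for exceeding its limit.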
