Ubuntu: SLURM array job BASH script inside a Python subprocess

Asked by xxe27gdn on 2022-12-03, in Python

Update: I can get a variable assigned from SLURM_JOB_ID with the line JOBID=`echo ${SLURM_JOB_ID}`. However, I have not been able to get SLURM_ARRAY_JOB_ID to assign itself to JOBID.
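As a sanity check (just a sketch, not the production workflow; the script contents and the direct local call to sbatch are assumptions), submitting a tiny array straight to sbatch should show whether SLURM_ARRAY_JOB_ID is set at all on this cluster, since that variable only exists inside tasks of jobs submitted with --array:

from subprocess import Popen, PIPE

# Minimal probe: submit a two-task array directly to sbatch (script read from stdin).
# SLURM_ARRAY_JOB_ID / SLURM_ARRAY_TASK_ID exist only inside array tasks, so the
# ${VAR:-fallback} form keeps the assignment working for plain jobs as well.
probe = """#!/bin/bash
#SBATCH --array=1-2
JOBID=${SLURM_ARRAY_JOB_ID:-${SLURM_JOB_ID}}
echo "JOBID=${JOBID} task=${SLURM_ARRAY_TASK_ID:-none}"
"""

proc = Popen('sbatch', shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate(input=probe.encode())  # Python 3: stdin wants bytes
print(out.decode(), err.decode())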
Because I need to support an existing HPC workflow, I have to pass a bash script through a Python subprocess. This works fine on OpenPBS, and now I need to convert it to SLURM. I have it mostly working against SLURM hosted on Ubuntu 20.04, except that the job array is not being populated. Below is a code snippet stripped down to the relevant parts.
My specific question: why do the lines JOBID=${SLURM_JOB_ID} and JOBID=${SLURM_ARRAY_JOB_ID} not get their assignments? I have tried a heredoc and various bashisms without success.
The code could certainly be cleaner; it is the result of several people working without a common standard.
These are related:
Accessing task id for array jobs
Handling bash system variables and slurm environmental variables in a wrapper script

sbatch_arguments = "#SBATCH --array=1-{}".format(get_instance_count())

        proc = Popen('ssh ${USER}@server_hostname /apps/workflows/slurm_wrapper.sh sbatch', shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
        job_string = """#!/bin/bash -x
        #SBATCH --job-name=%(name)s
        #SBATCH -t %(walltime)s
        #SBATCH --cpus-per-task %(processors)s
        #SBATCH --mem=%(memory)s
        %(sbatch_args)s

        # Assign JOBID
        if [ %(num_jobs)s -eq 1 ]; then
            JOBID=${SLURM_JOB_ID}
        else
            JOBID=${SLURM_ARRAY_JOB_ID}
        fi

        exit ${returnCode}

        """ % ({"walltime": walltime
                ,"processors": total_cores
                ,"binary": self.binary_name
                ,"name": ''.join(x for x in self.binary_name if x.isalnum())
                ,"memory": memory
                ,"num_jobs": self.get_instance_count()
                ,"sbatch_args": sbatch_arguments
                })

        # Send job_string to sbatch
        stdout, stderr = proc.communicate(input=job_string)
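One thing that might help with debugging (a hypothetical fragment, not part of the workflow above): splice an extra line into job_string before the communicate() call so the job's stdout lists every SLURM_* variable the task actually sees at run time.

# Hypothetical debug fragment, appended to job_string before the communicate()
# call above; the job's output then shows which SLURM_* variables exist.
debug_snippet = """
env | grep '^SLURM_' || echo "no SLURM_ variables set"
"""
job_string += debug_snippet

# Note: under Python 3, communicate() expects bytes unless the Popen above is
# created with text=True, e.g. proc.communicate(input=job_string.encode()).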

Answer 1, by xwbd5t1u:

I worked around this by passing the SBATCH directives as arguments to the sbatch command.

sbatch_args = """--job-name=%(name)s --time=%(walltime)s --partition=defq --cpus-per-task=%(processors)s --mem=%(memory)s""" % (
                    {"walltime": walltime
                    ,"processors": cores
                    ,"name": ''.join(x for x in self.binary_name if x.isalnum())
                    ,"memory": memory
                    })

    # Open a pipe to the sbatch command. {tee /home/ahs/schuec1/_stderr_slurmqueue | sbatch; }
    # The SLURM variables SLURM_ARRAY_* do not exist until after sbatch is called.
    # Popen.communicate has BASH interpret all variables at the same time the script is sent.
    # Because of that, the job array needs to be declared prior to the rest of the BASH script.

    # It seems further that the SBATCH directives are not being evaluated when passed in a string via .communicate.
    # Because of this, all SBATCH directives are passed as arguments to slurm_wrapper.sh in the first command of the Popen pipe.

    proc = Popen('ssh ${USER}@ch3lahpcgw1.corp.cat.com /apps/workflows/slurm_wrapper.sh sbatch %s' % sbatch_args,
                 shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE,
                 close_fds=True,
                 executable='/bin/bash')
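For completeness, a sketch of how the rest of the submission might look once the directives travel as arguments (the --array argument and the script body below are assumptions based on the explanation above, not the exact production code): the array is declared on the command line before the Popen call is built, and the script sent over stdin only does run-time work, so ${SLURM_ARRAY_JOB_ID} is expanded on the compute node rather than when the string is sent.

    # Hypothetical continuation: declare the array on the command line, added to
    # sbatch_args before the Popen call above is constructed ...
    sbatch_args += " --array=1-%d" % self.get_instance_count()

    # ... so the script that goes over stdin only contains run-time logic and the
    # SLURM_* variables are resolved by the shell on the compute node.
    job_string = """#!/bin/bash -x
    if [ -n "${SLURM_ARRAY_JOB_ID}" ]; then
        JOBID=${SLURM_ARRAY_JOB_ID}
    else
        JOBID=${SLURM_JOB_ID}
    fi
    echo "running as job ${JOBID}, task ${SLURM_ARRAY_TASK_ID:-1}"
    """

    # (Under Python 3, encode the string or open the Popen with text=True.)
    stdout, stderr = proc.communicate(input=job_string)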
