How do I execute spark-submit on Amazon EMR from a Lambda function?

rdlzhqv9 asked on 2021-06-24 in Hive

I want to run a spark-submit job on an AWS EMR cluster whenever a file is uploaded to S3. I am using an AWS Lambda function to capture the upload event, but I don't know how to submit a spark-submit job to the EMR cluster from the Lambda function.
Most of the answers I found talk about adding a step to the EMR cluster, but I don't know whether the step that gets added can actually trigger "spark-submit --with args".
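For context, an S3 upload trigger delivers the bucket name and object key inside the Lambda event, so the handler can read them before submitting anything to EMR. A minimal sketch (the handler below is illustrative, not from the original question):

def lambda_handler(event, context):
    # S3 put events carry one or more records; take the first one here
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']
    # hand bucket/key to whatever submits the EMR step (see the answers below)
    return {'bucket': bucket, 'key': key}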


vwhgwdsa1#

Yes, you can. I did exactly this last week!
Using boto3 for Python (other languages certainly have similar solutions), you can either launch a cluster with the steps already defined, or attach steps to an already-running cluster.

Launching a cluster with the steps defined

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='ClusterName',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://some-log-uri/elasticmapreduce/',
        ReleaseLabel='emr-5.8.0',
        Instances={
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'key-name',
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark'
        }],
        Configurations=[{
            "Classification":"spark-env",
            "Properties":{},
            "Configurations":[{
                "Classification":"export",
                "Properties":{
                    "PYSPARK_PYTHON":"python35",
                    "PYSPARK_DRIVER_PYTHON":"python35"
                }
            }]
        }],
        BootstrapActions=[{
            'Name': 'Install',
            'ScriptBootstrapAction': {
                'Path': 's3://path/to/bootstrap.script'
            }
        }],
        Steps=[{
            'Name': 'StepName',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/usr/bin/spark-submit", "--deploy-mode", "cluster",
                    's3://path/to/code.file', '-i', 'input_arg', 
                    '-o', 'output_arg'
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)

Attaching steps to an already-running cluster

Based on the approach described here:

import sys
import time

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # chooses the first cluster which is Running or Waiting
    # possibly can also choose by name or already have the cluster id
    clusters = conn.list_clusters()
    # choose the correct cluster
    clusters = [c["Id"] for c in clusters["Clusters"] 
                if c["Status"]["State"] in ["RUNNING", "WAITING"]]
    if not clusters:
        sys.stderr.write("No valid clusters\n")
        sys.exit(1)
    # take the first relevant cluster
    cluster_id = clusters[0]
    # code location on your emr master node
    CODE_DIR = "/home/hadoop/code/"

    # spark configuration example
    step_args = ["/usr/bin/spark-submit", "--spark-conf", "your-configuration",
                 CODE_DIR + "your_file.py", '--your-parameters', 'parameters']

    step = {"Name": "what_you_do-" + time.strftime("%Y%m%d-%H:%M"),
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': step_args
            }
        }
    action = conn.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
    return "Added step: %s"%(action)

xa9qqrwz2#

If you want to execute a Spark jar as you would with the spark-submit command, you can use the following Python code in an AWS Lambda function (it posts the job as a batch to the Apache Livy server listening on port 8998 of the EMR master node):

from botocore.vendored import requests  # deprecated in newer runtimes; package `requests` with the function instead
import json

def lambda_handler(event, context):
    headers = {"content-type": "application/json"}

    # Livy batches endpoint on the EMR master node
    url = 'http://ip-address.ec2.internal:8998/batches'

    payload = {
        # main application jar; dependency jars go in the 'jars' list
        'file': 's3://Bucket/Orchestration/SparkCode.jar',
        'jars': [
            's3://Bucket/Orchestration/RedshiftJDBC41.jar',
            's3://Bucket/Orchestration/mysql-connector-java-8.0.12.jar'
        ],
        'className': 'Main Class Name',
        'args': [event.get('rootPath')]
    }

    res = requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
    json_data = json.loads(res.text)
    return json_data.get('id')
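The value returned above is the Livy batch id, which can be used later to check the job's state against the same endpoint. A minimal sketch (reusing the url and headers variables from the handler above):

    batch_id = json_data.get('id')
    batch = requests.get("{}/{}".format(url, batch_id), headers=headers, verify=False).json()
    print(batch.get('state'))  # e.g. starting, running, success, dead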
