Error submitting PySpark job to Dataproc cluster (Job not found)



I have a script based on the GCP Python client libraries that is meant to provision clusters and submit jobs to them. When I run the script, it successfully uploads the files to Google Storage, creates the cluster, and submits the job. The error appears when it runs my wait_for_job() function, as shown below:

    Waiting for job to finish...
    Traceback (most recent call last):
       File "/Users/cdastmalchi/WGS_automation_python_SDK.py", line 
          174, in <module> main()
       File "/Users/cdastmalchi/WGS_automation_python_SDK.py", line 
          168, in main wait_for_job(dataproc, args.project_id, 
          region, args.cluster_name)
       File "/Users/cdastmalchi/WGS_automation_python_SDK.py", line 
          132, in wait_for_job
          jobId=job_id).execute()
       File "/anaconda/lib/python2.7/site-
          packages/oauth2client/util.py", line 137, in 
          positional_wrapper
          return wrapped(*args, **kwargs)
       File "/anaconda/lib/python2.7/site-
          packages/googleapiclient/http.py", line 842, in execute
          raise HttpError(resp, content, uri=self.uri)
    googleapiclient.errors.HttpError: <HttpError 404 when requesting 
    https://dataproc.googleapis.com/v1/projects/my-
    project/regions/us-east4/jobs/my-cluster?alt=json returned "Job 
    not found my-project/my-cluster"> 

Here is my wait_for_job() function:

    def wait_for_job(dataproc, project, region, job_id):
        print('Waiting for job to finish...')
        while True:
            result = dataproc.projects().regions().jobs().get(
                projectId=project,
                region=region,
                jobId=job_id).execute()
            # Handle exceptions
            if result['status']['state'] == 'ERROR':
                raise Exception(result['status']['details'])
            elif result['status']['state'] == 'DONE':
                print('Job finished.')
                return result
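As an aside, this loop polls jobs().get() as fast as it can. A short sleep between calls is easier on API quota; here is a sketch of the same function with a delay (the poll_seconds parameter is my addition, not part of the original script):

```python
import time

def wait_for_job(dataproc, project, region, job_id, poll_seconds=1):
    """Poll jobs().get() until the job reaches a terminal state."""
    print('Waiting for job to finish...')
    while True:
        result = dataproc.projects().regions().jobs().get(
            projectId=project,
            region=region,
            jobId=job_id).execute()
        state = result['status']['state']
        if state == 'ERROR':
            raise Exception(result['status']['details'])
        if state == 'DONE':
            print('Job finished.')
            return result
        # Back off briefly before polling again
        time.sleep(poll_seconds)
```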

Here is my create_cluster() function:

    def create_cluster(dataproc, project, zone, region, cluster_name, master_type, worker_type):
        print('Creating cluster...')
        zone_uri = 'https://www.googleapis.com/compute/v1/projects/{}/zones/{}'.format(
            project, zone)
        cluster_data = {
            'projectId': project,
            'clusterName': cluster_name,
            'config': {
                'gceClusterConfig': {
                    'zoneUri': zone_uri,
                },
                'masterConfig': {
                    'machineTypeUri': master_type,
                },
                'workerConfig': {
                    'machineTypeUri': worker_type,
                },
            }
        }
        result = dataproc.projects().regions().clusters().create(
            projectId=project,
            region=region,
            body=cluster_data).execute()
        return result

Do you think the problem is related to the region/zone? My cluster is in us-east4-b, and the attempted job submission is in us-east4.

Your error message shows that your code is passing args.cluster_name into wait_for_job, while wait_for_job's signature expects a job_id as its last parameter, not a cluster name:

   File "/Users/cdastmalchi/WGS_automation_python_SDK.py", line 
      168, in main wait_for_job(dataproc, args.project_id, 
      region, args.cluster_name)

You need to change that argument to your job ID.
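The job ID comes back in the Job resource that jobs().submit() returns; a minimal sketch of pulling it out (the helper name and sample values below are hypothetical, assuming the standard Dataproc v1 Job resource shape):

```python
def extract_job_id(submit_result):
    # jobs().submit() returns a Job resource; the server-assigned ID
    # lives under the 'reference' field.
    return submit_result['reference']['jobId']

# Example shape of a submit response (values are made up):
sample_result = {
    'reference': {'projectId': 'my-project', 'jobId': 'a1b2c3d4'},
    'status': {'state': 'PENDING'},
}

job_id = extract_job_id(sample_result)
# then, in main(): wait_for_job(dataproc, args.project_id, region, job_id)
```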
