App Engine MapReduce hitting the memory limit



I am working with the App Engine MapReduce API and have modified the demo to suit my purpose. Basically, I have over a million rows in the format userid, time1, time2, and my goal is to find the difference between time1 and time2 for each userid.
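For example, for a single made-up sample row, the calculation I am after would be:

import time

# Hypothetical sample row in the userid,time1,time2 format described above.
row = "user123,01/15/12 09:30:00AM,01/15/12 11:45:30AM"
userid, sdw, edw = row.split(',')
start_date = time.strptime(sdw, "%m/%d/%y %I:%M:%S%p")
end_date = time.strptime(edw, "%m/%d/%y %I:%M:%S%p")
print("%s: %d" % (userid, time.mktime(end_date) - time.mktime(start_date)))  # user123: 8130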

However, when I run it on Google App Engine, I get this error message in the logs:

Exceeded soft private memory limit with 180.56 MB after servicing 130 requests total. While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application.

import csv
import logging
import time

def time_count_map(data):
  """Time count map function."""
  (entry, text_fn) = data
  text = text_fn()
  try:
    q = text.split('\n')
    for m in q:
        # Strip stray carriage returns so csv.reader sees clean rows.
        reader = csv.reader([m.replace('\r', '')], skipinitialspace=True)
        for s in reader:
            # Calculate the time elapsed between time1 and time2.
            sdw = s[1]
            start_date = time.strptime(sdw, "%m/%d/%y %I:%M:%S%p")
            edw = s[2]
            end_date = time.strptime(edw, "%m/%d/%y %I:%M:%S%p")
            time_difference = time.mktime(end_date) - time.mktime(start_date)
            yield (s[0], time_difference)
  except IndexError as e:
    logging.debug(e)

def time_count_reduce(key, values):
  """Time count reduce function."""
  total = 0.0
  for subtime in values:
    total += float(subtime)
  realtime = int(total)
  yield "%s: %d\n" % (key, realtime)

Can anyone suggest how I could optimize my code better? Thanks!

Edit:

Here is the pipeline handler:

class TimeCountPipeline(base_handler.PipelineBase):
  """A pipeline to run Time count demo.
  Args:
    blobkey: blobkey to process as string. Should be a zip archive with
      text files inside.
  """
  def run(self, filekey, blobkey):
    logging.debug("filename is %s" % filekey)
    output = yield mapreduce_pipeline.MapreducePipeline(
        "time_count",
        "main.time_count_map",
        "main.time_count_reduce",
        "mapreduce.input_readers.BlobstoreZipInputReader",
        "mapreduce.output_writers.BlobstoreOutputWriter",
        mapper_params={
            "blob_key": blobkey,
        },
        reducer_params={
            "mime_type": "text/plain",
        },
        shards=32)
    yield StoreOutput("TimeCount", filekey, output)

Mapreduce.yaml:

mapreduce:
- name: Make messages lowercase
  params:
  - name: done_callback
    value: /done
  mapper:
    handler: main.lower_case_posts
    input_reader: mapreduce.input_readers.DatastoreInputReader
    params:
    - name: entity_kind
      default: main.Post
    - name: processing_rate
      default: 100
    - name: shard_count
      default: 4
- name: Make messages upper case
  params:
  - name: done_callback
    value: /done
  mapper:
    handler: main.upper_case_posts
    input_reader: mapreduce.input_readers.DatastoreInputReader
    params:
    - name: entity_kind
      default: main.Post
    - name: processing_rate
      default: 100
    - name: shard_count
      default: 4

The rest of the files are exactly the same as the demo.

I have uploaded a copy of my code to Dropbox: http://dl.dropbox.com/u/4288806/demo%20compressed%20fail%20memory.zip

Also consider calling gc.collect() at regular points in your code. I have seen several SO questions about exceeding the soft memory limit that were alleviated by calling gc.collect(), most of them to do with blobstore.
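For example, a rough sketch (adapting the mapper above, not your exact code) of where such a call could go:

import gc

def time_count_map(data):
    """Time count map with a periodic gc.collect() hint (sketch only)."""
    (entry, text_fn) = data
    text = text_fn()
    for i, line in enumerate(text.split('\n')):
        # ... parse the row into userid and time_difference as in the mapper above ...
        # yield (userid, time_difference)
        if i % 1000 == 0:    # the interval of 1000 rows is an arbitrary choice
            gc.collect()     # nudge the collector to free per-row garbage sooner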

It is likely that your input file exceeds the soft memory limit in size. For big files use either BlobstoreLineInputReader or BlobstoreZipLineInputReader.

These input readers pass something different to the map function: they pass the start_position in the file and the line of text.
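If you switch to BlobstoreLineInputReader, the MapreducePipeline call in your TimeCountPipeline would change roughly like this (a sketch only; as far as I remember this reader takes a blob_keys mapper parameter instead of blob_key, so verify against the mapreduce version you are using):

    output = yield mapreduce_pipeline.MapreducePipeline(
        "time_count",
        "main.time_count_map",
        "main.time_count_reduce",
        "mapreduce.input_readers.BlobstoreLineInputReader",
        "mapreduce.output_writers.BlobstoreOutputWriter",
        mapper_params={
            "blob_keys": blobkey,  # note: "blob_keys", not "blob_key" (assumption, please verify)
        },
        reducer_params={
            "mime_type": "text/plain",
        },
        shards=32)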

Your map function might then look something like:

def time_count_map(data):
    """Time count map function."""
    text = data[1]
    try:
        # Strip stray carriage returns so csv.reader sees a clean row.
        reader = csv.reader([text.replace('\r', '')], skipinitialspace=True)
        for s in reader:
            # Calculate the time elapsed between time1 and time2.
            sdw = s[1]
            start_date = time.strptime(sdw, "%m/%d/%y %I:%M:%S%p")
            edw = s[2]
            end_date = time.strptime(edw, "%m/%d/%y %I:%M:%S%p")
            time_difference = time.mktime(end_date) - time.mktime(start_date)
            yield (s[0], time_difference)
    except IndexError as e:
        logging.debug(e)

Using BlobstoreLineInputReader will let the job run much faster, since it can use more than one shard (up to 256), but it means you have to upload your files uncompressed, which can be a pain. I handled it by uploading the compressed files to an EC2 Windows server, then decompressing and uploading from there, since the upstream bandwidth there is so big.
