How to mimic multiprocessing.Pool.map() in AWS Lambda



Python on AWS Lambda does not support multiprocessing.Pool.map(), as shown in other questions. Note that the other question asks why it does not work. This question is different: I am asking how to mimic the basic functionality using what is supported.

One of the answers to that other question gives us this code:

# Python 3.6
import multiprocessing
import multiprocessing.connection
from multiprocessing import Pipe, Process

def myWorkFunc(data, connection):
    result = None
    # Do some work and store it in result
    if result:
        connection.send([result])
    else:
        connection.send([None])

def myPipedMultiProcessFunc(iterable):
    # Get number of available logical cores
    plimit = multiprocessing.cpu_count()
    # Setup management variables
    results = []
    parent_conns = []
    pcount = 0
    for data in iterable:
        # Create the pipe for parent-child process communication
        parent_conn, child_conn = Pipe()
        # Create the process, pass data to be operated on and connection
        process = Process(target=myWorkFunc, args=(data, child_conn,))
        parent_conns.append(parent_conn)
        process.start()
        pcount += 1
        if pcount == plimit: # There is not currently room for another process
            # Wait until there are results in the Pipes
            finishedConns = multiprocessing.connection.wait(parent_conns)
            # Collect the results and remove the connection, as processing
            # the connection again will lead to errors
            for conn in finishedConns:
                results.append(conn.recv()[0])
                parent_conns.remove(conn)
                # Decrement pcount so we can add a new process
                pcount -= 1
    # Ensure all remaining active processes have their results collected
    for conn in parent_conns:
        results.append(conn.recv()[0])
        conn.close()
    # Process results as needed

Can this sample code be modified to support multiprocessing.Pool.map()?

What I have tried so far

I analyzed the code above, and I cannot see a parameter for the function to execute or for the data, so I infer that it does not perform the same function as multiprocessing.Pool.map(). It is not clear what the code does, beyond demonstrating building blocks that could be assembled into a solution.
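To make the goal concrete, here is a minimal sketch of what a Pool.map() stand-in built only from Process and Pipe might look like. The name pipe_map and its signature are my own, not from the linked answer; one process per item, no concurrency cap:

```python
from multiprocessing import Process, Pipe

def _worker(func, item, conn):
    # Run the function on one item and ship the result back over the pipe.
    conn.send(func(item))
    conn.close()

def pipe_map(func, iterable):
    """Rough stand-in for multiprocessing.Pool.map() using Process + Pipe."""
    parent_conns = []
    processes = []
    for item in iterable:
        parent_conn, child_conn = Pipe()
        p = Process(target=_worker, args=(func, item, child_conn))
        p.start()
        parent_conns.append(parent_conn)
        processes.append(p)
    # Read results first, then join; iterating parent_conns in creation
    # order means the results come back in input order, like Pool.map().
    results = [conn.recv() for conn in parent_conns]
    for p in processes:
        p.join()
    return results
```

Unlike Pool.map(), this spawns one process per element, so it is only sensible for short input lists.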

Is this a "write my code for me" question?

Yes, to some degree it is. This issue affects thousands of Python developers, and it would be more efficient for the world economy, greenhouse gas emissions, and so on if we all shared the same code, rather than forcing everyone who hits this problem to develop their own workaround. I hope I have done my part by distilling it into a clear question, ready to be answered.

I was able to get this working for my own tests. I based my code on this link: https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/

NB1: you must increase the memory allocation of the Lambda function. With the minimum default amount, there is no performance gain from multiprocessing. The numbers below use the maximum my account can allocate (3008 MB).

NB2: I am completely ignoring the maximum number of parallel processes here. My usage does not have many elements to work on.
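If your work list is large, the cpu_count() cap from the first snippet still matters. A hedged sketch of one way to respect it (my own batching approach, not from the blog post): process the input in batches of at most cpu_count() concurrent workers.

```python
import multiprocessing
from multiprocessing import Process, Pipe

def _worker(func, item, conn):
    # Run the function on one item and ship the result back over the pipe.
    conn.send(func(item))
    conn.close()

def bounded_pipe_map(func, iterable, limit=None):
    """Like pipe_map, but never more than `limit` live processes at once."""
    limit = limit or multiprocessing.cpu_count()
    items = list(iterable)
    results = []
    for start in range(0, len(items), limit):
        batch = items[start:start + limit]
        conns, procs = [], []
        for item in batch:
            parent_conn, child_conn = Pipe()
            p = Process(target=_worker, args=(func, item, child_conn))
            p.start()
            conns.append(parent_conn)
            procs.append(p)
        # Drain this batch before starting the next one.
        results.extend(conn.recv() for conn in conns)
        for p in procs:
            p.join()
    return results
```

Batching is cruder than the connection.wait() approach above (a slow item stalls its whole batch), but it is much simpler to reason about.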

With the code below, usage is:

work = funcmap(yourfunction,listofstufftoworkon)
yourresults = work.run()

Running from my laptop:

jumper@jumperdebian[3333] ~/scripts/tmp  2019-09-04 11:52:30
└─ $ ∙ python3 -c "import tst; tst.lambda_handler(None,None)"
results : [(35, 9227465), (35, 9227465), (35, 9227465), (35, 9227465)]
SP runtime : 9.574460506439209
results : [(35, 9227465), (35, 9227465), (35, 9227465), (35, 9227465)]
MP runtime : 6.422513484954834

Running from AWS:

Function Logs:
START RequestId: 075a92c0-7c4f-4f48-9820-f394ee899a97 Version: $LATEST
results : [(35, 9227465), (35, 9227465), (35, 9227465), (35, 9227465)]
SP runtime : 12.135798215866089
results : [(35, 9227465), (35, 9227465), (35, 9227465), (35, 9227465)]
MP runtime : 7.293526887893677
END RequestId: 075a92c0-7c4f-4f48-9820-f394ee899a97

Here is the test code:

import time
from multiprocessing import Process, Pipe

class funcmap(object):
    def __init__(self, pfunction, plist):
        self.fmfunction = pfunction
        self.fmlist = plist
    def calculation(self, pfunction, pload, conn):
        # Run the function on one payload and send (input, result) back.
        panswer = pfunction(pload)
        conn.send([pload, panswer])
        conn.close()
    def run(self):
        datalist = self.fmlist
        processes = []
        parent_connections = []
        for datum in datalist:
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)
            process = Process(target=self.calculation, args=(self.fmfunction, datum, child_conn,))
            processes.append(process)
        pstart = time.time()
        for process in processes:
            process.start()
            #print("starting at t+ {} s".format(time.time()-pstart))
        for process in processes:
            process.join()
            #print("joining at t+ {} s".format(time.time()-pstart))
        results = []
        for parent_connection in parent_connections:
            resp = parent_connection.recv()
            results.append((resp[0], resp[1]))
        return results

def fibo(n):
    if n <= 2 : return 1
    return fibo(n-1)+fibo(n-2)
def lambda_handler(event, context):
    #worklist=[22,23,24,25,26,27,28,29,30,31,32,31,30,29,28,27,26,27,28,29]
    #worklist=[22,23,24,25,26,27,28,29,30]
    worklist=[30,30,30,30]
    #worklist=[30]
    _start = time.time()
    results=[]
    for a in worklist:
        results.append((a,fibo(a)))
    print("results : {}".format(results))
    _end = time.time()
    print("SP runtime : {}".format(_end-_start))
    _mstart = time.time()
    work = funcmap(fibo,worklist)
    results = work.run()
    print("results : {}".format(results))
    _mend = time.time()
    print("MP runtime : {}".format(_mend-_mstart))
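One caveat with run() above: it joins each process before reading from the pipes. A Pipe has a finite OS buffer (often around 64 KiB), so a child sending a large payload can block inside send(), and the parent's join() then deadlocks; for tiny tuples like (n, fibo(n)) this never triggers. A hedged sketch (names and sizes are mine) of the safe ordering, recv() before join():

```python
from multiprocessing import Process, Pipe

def send_big(conn):
    # A payload whose pickled form is comfortably larger than a
    # typical pipe buffer, so send() blocks until the parent reads.
    conn.send([0] * 100_000)
    conn.close()

def collect(n_children):
    conns, procs = [], []
    for _ in range(n_children):
        parent_conn, child_conn = Pipe()
        p = Process(target=send_big, args=(child_conn,))
        p.start()
        conns.append(parent_conn)
        procs.append(p)
    # Drain first: recv() unblocks each child's send() ...
    payloads = [conn.recv() for conn in conns]
    # ... and only then is join() safe, even for large messages.
    for p in procs:
        p.join()
    return payloads
```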

Hope it helps.

I ran into the same problem and ended up implementing my own simple wrapper around multiprocessing.Pool. Definitely not bulletproof, but good enough for simple use cases as a drop-in replacement.

https://stackoverflow.com/a/63633248/158049
