GIL for IO-bound threads in C extensions (HDF5)



I have a sampling application that acquires 250,000 samples per second, buffers them in memory, and eventually appends them to an HDFStore provided by pandas. Overall, this works well. However, I have a thread that runs continuously to empty the data acquisition device (DAQ), and it needs to run at regular intervals; a deviation of around one second tends to break things. Below is an extreme case of the observed timings. Start denotes when a DAQ read begins, Finish when it ends, and IO marks an HDF write (DAQ and IO happen in separate threads).

Start        : 2016-04-07 12:28:22.241303
IO (1)       : 2016-04-07 12:28:22.241303
Finish       : 2016-04-07 12:28:46.573440 (0.16 Hz, 24331.26 ms)
IO Done (1)  : 2016-04-07 12:28:46.573440 (24332.39 ms)

As you can see, it takes 24 seconds to perform this write (a typical write takes about 40 ms). The HDD I am writing to is not loaded, so this delay should not be caused by contention (it sits at about 7% utilization while running). I have disabled indexing on the HDFStore writes. My application runs many other threads, all of which print status strings, so the IO task appears to be blocking all the other threads. I have spent quite a bit of time stepping through the code to figure out where things slow down, and it is always inside a method provided by a C extension, which leads me to my questions:

  1. Can Python (I am using 3.5) preempt execution inside a C extension? Concurrency: Are Python extensions written in C/C++ affected by the Global Interpreter Lock? seems to indicate that it cannot, unless the extension specifically yields.
  2. Does pandas' HDF5 C code implement any I/O yielding? If so, does that mean the delay is caused by a CPU-bound task? I have disabled indexing.
  3. Any suggestions for getting consistent timing? I am considering moving the HDF5 code into a different process, but that only helps to a point: I really cannot tolerate ~20-second writes, especially when they are unpredictable.

Here is an example you can run to see the problem:

import pandas as pd
import numpy as np
from timeit import default_timer as timer
import datetime
import random
import threading
import time

def write_samples(store, samples, overwrite):
    frame = pd.DataFrame(samples, dtype='float64')
    if not overwrite:
        store.append("df", frame, format='table', index=False)
    else:
        store.put("df", frame, format='table', index=False)

def begin_io():
    # Raw string: otherwise "\t" in "D:\slow\test" is parsed as a tab character.
    store = pd.HDFStore(r"D:\slow\test" + str(random.randint(0, 100)) + ".h5", mode='w', complevel=0)
    counter = 0
    while True:
        data = np.random.rand(50000, 1)
        start_time = timer()
        write_samples(store, data, counter == 0)
        end_time = timer()
        print("IO Done      : %s (%.2f ms, %d)" % (datetime.datetime.now(), (end_time - start_time) * 1000, counter))
        counter += 1
    store.close()

def dummy_thread():
    previous = timer()
    while True:
        now = timer()
        print("Dummy Thread  : %s (%d ms)" % (datetime.datetime.now(), (now - previous) * 1000))
        previous = now
        time.sleep(0.01)

if __name__ == '__main__':
    threading.Thread(target=dummy_thread).start()
    begin_io()

You will get output similar to this:

IO Done      : 2016-04-08 10:51:14.100479 (3.63 ms, 470)
Dummy Thread  : 2016-04-08 10:51:14.101484 (12 ms)
IO Done      : 2016-04-08 10:51:14.104475 (3.01 ms, 471)
Dummy Thread  : 2016-04-08 10:51:14.576640 (475 ms)
IO Done      : 2016-04-08 10:51:14.576640 (472.00 ms, 472)
Dummy Thread  : 2016-04-08 10:51:14.897756 (321 ms)
IO Done      : 2016-04-08 10:51:14.898782 (320.79 ms, 473)
IO Done      : 2016-04-08 10:51:14.901772 (3.29 ms, 474)
IO Done      : 2016-04-08 10:51:14.905773 (2.84 ms, 475)
IO Done      : 2016-04-08 10:51:14.908775 (2.96 ms, 476)
Dummy Thread  : 2016-04-08 10:51:14.909777 (11 ms)

The answer is no, these writers do not release the GIL. See the documentation here. I know you are not actually trying to write with multiple threads, but this should give you the clue: strong locks are held while a write happens, to prevent multiple writers. Both PyTables and h5py do this as part of the HDF5 standard.

You could take a look at SWMR, though it is not directly supported by pandas. The PyTables docs here and here point to solutions, which generally involve a separate process pulling data off a queue and writing it.
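
For reference, SWMR at the h5py level looks roughly like the following minimal sketch. It bypasses pandas (HDFStore does not expose SWMR) and assumes HDF5 1.10+ with a recent h5py; the file and dataset names are illustrative:

import h5py
import numpy as np

# Writer side: a single writer appends while readers watch the file grow.
with h5py.File("swmr_demo.h5", "w", libver='latest') as f:
    dset = f.create_dataset("samples", shape=(0, 1), maxshape=(None, 1),
                            dtype='float64', chunks=True)
    f.swmr_mode = True                     # from here on, readers may attach
    for _ in range(5):
        block = np.random.rand(50000, 1)
        n = dset.shape[0]
        dset.resize(n + block.shape[0], axis=0)
        dset[n:] = block
        dset.flush()                       # make the new rows visible to readers

# Reader side (normally a separate process):
with h5py.File("swmr_demo.h5", "r", libver='latest', swmr=True) as f:
    dset = f["samples"]
    dset.refresh()                         # pick up the latest flushed size
    print(dset.shape)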

In any event, the separate-writer-process approach is usually the more scalable pattern anyway.
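
To make that pattern concrete, here is a minimal sketch of a dedicated writer process fed through a multiprocessing.Queue; the file name queue_writer.h5, the queue size, and the sentinel-based shutdown are illustrative choices, not anything mandated by pandas or PyTables:

import multiprocessing as mp
import numpy as np
import pandas as pd

def writer(queue, path):
    # Runs in its own process and owns the HDF5 file exclusively, so the
    # lock held during each write cannot stall threads in other processes.
    with pd.HDFStore(path, mode='w', complevel=0) as store:
        while True:
            frame = queue.get()            # blocks until data arrives
            if frame is None:              # sentinel: the producer is done
                break
            store.append("df", frame, format='table', index=False)

if __name__ == '__main__':
    q = mp.Queue(maxsize=16)               # bounded, so the producer feels back-pressure
    p = mp.Process(target=writer, args=(q, "queue_writer.h5"))
    p.start()
    for _ in range(10):
        q.put(pd.DataFrame(np.random.rand(50000, 1), dtype='float64'))
    q.put(None)                            # ask the writer to finish
    p.join()

Because the queue is bounded, a slow disk shows up as back-pressure on the producer (put() blocks) rather than as unbounded memory growth.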

Thank you for providing working code. I have modified it to get some insight, and later created a modified version using multiprocessing.

Modified threaded version

All of the modifications are just to get more information; there is no conceptual change. Everything goes into a single file and is commented section by section.

The usual imports:

import pandas as pd
import numpy as np
from timeit import default_timer as timer
import datetime
import random
import threading
import logging

write_samples gained some logging:

def write_samples(store, samples, overwrite):
    wslog = logging.getLogger("write_samples")
    wslog.info("starting")
    frame = pd.DataFrame(samples, dtype='float64')
    if overwrite:
        store.put("df", frame, format='table', index=False)
    else:
        store.append("df", frame, format='table', index=False)
    wslog.info("finished")

begin_io gained a maximum duration; exceeding that time results in a warning log entry:

def begin_io(maxduration=500):
    iolog = logging.getLogger("begin_io")
    iolog.info("starting")
    try:
        fname = "data/tab" + str(random.randint(0, 100)) + ".h5"
        iolog.debug("opening store %s", fname)
        with pd.HDFStore(fname, mode='w', complevel=0) as store:
            iolog.debug("store %s open", fname)
            counter = 0
            while True:
                data = np.random.rand(50000, 1)
                start_time = timer()
                write_samples(store, data, counter == 0)
                end_time = timer()
                duration = (end_time - start_time) * 1000
                iolog.debug("IO Done      : %s (%.2f ms, %d)",
                            datetime.datetime.now(),
                            duration,
                            counter)
                if duration > maxduration:
                    iolog.warning("Long duration %s", duration)
                counter += 1
    except Exception:
        iolog.exception("oops")
    finally:
        iolog.info("finished")

dummy_thread was modified to stop properly, and it also warns when an iteration takes too long:

def dummy_thread(pill2kill, maxduration=500):
    dtlog = logging.getLogger("dummy_thread")
    dtlog.info("starting")
    try:
        previous = timer()
        while not pill2kill.wait(0.01):
            now = timer()
            duration = (now - previous) * 1000
            dtlog.info("Dummy Thread  : %s (%d ms)",
                       datetime.datetime.now(),
                       duration)
            if duration > maxduration:
                dtlog.warning("Long duration %s", duration)
            previous = now
        dtlog.debug("stopped looping.")
    except Exception:
        dtlog.exception("oops")
    finally:
        dtlog.info("finished")

Finally, we call it all. Feel free to adjust the log level: WARNING shows only the excessive durations, while INFO and DEBUG say much more.

if __name__ == '__main__':
    logformat = '%(asctime)-15s [%(levelname)s] - %(name)s: %(message)s'
    logging.basicConfig(format=logformat,
                        level=logging.WARNING)
    pill2kill = threading.Event()
    t = threading.Thread(target=dummy_thread, args=(pill2kill, 500))
    t.start()
    try:
        begin_io(500)
    finally:
        pill2kill.set()
        t.join()

Running the code, I get results just as you describe:

2016-04-08 15:29:11,428 [WARNING] - begin_io: Long duration 5169.03591156
2016-04-08 15:29:11,429 [WARNING] - dummy_thread: Long duration 5161.45706177
2016-04-08 15:29:27,305 [WARNING] - begin_io: Long duration 1447.40581512
2016-04-08 15:29:27,306 [WARNING] - dummy_thread: Long duration 1450.75201988
2016-04-08 15:29:32,893 [WARNING] - begin_io: Long duration 1610.98194122
2016-04-08 15:29:32,894 [WARNING] - dummy_thread: Long duration 1612.98394203
2016-04-08 15:29:34,930 [WARNING] - begin_io: Long duration 823.182821274
2016-04-08 15:29:34,930 [WARNING] - dummy_thread: Long duration 815.275907516
2016-04-08 15:29:43,640 [WARNING] - begin_io: Long duration 510.369062424
2016-04-08 15:29:43,640 [WARNING] - dummy_thread: Long duration 511.776924133

From these values it is clear that while begin_io is very busy and delayed (probably while the data is being written to disk), dummy_thread is delayed by almost the same amount of time.

Version with multiprocessing - it runs well

I modified the code to run in multiple processes, and since then it really does not block dummy_thread:

2016-04-08 15:38:12,487 [WARNING] - begin_io: Long duration 755.397796631
2016-04-08 15:38:14,127 [WARNING] - begin_io: Long duration 1434.60512161
2016-04-08 15:38:15,725 [WARNING] - begin_io: Long duration 848.396062851
2016-04-08 15:38:24,290 [WARNING] - begin_io: Long duration 1129.17089462
2016-04-08 15:38:25,609 [WARNING] - begin_io: Long duration 1059.08918381
2016-04-08 15:38:31,165 [WARNING] - begin_io: Long duration 646.969079971
2016-04-08 15:38:37,273 [WARNING] - begin_io: Long duration 1699.17201996
2016-04-08 15:38:43,788 [WARNING] - begin_io: Long duration 1555.341959
2016-04-08 15:38:47,765 [WARNING] - begin_io: Long duration 639.196872711
2016-04-08 15:38:54,269 [WARNING] - begin_io: Long duration 1690.57011604
2016-04-08 15:39:06,397 [WARNING] - begin_io: Long duration 1998.33416939
2016-04-08 15:39:16,980 [WARNING] - begin_io: Long duration 2558.51006508
2016-04-08 15:39:21,688 [WARNING] - begin_io: Long duration 1132.73501396
2016-04-08 15:39:26,450 [WARNING] - begin_io: Long duration 876.784801483
2016-04-08 15:39:29,809 [WARNING] - begin_io: Long duration 709.135055542
2016-04-08 15:39:31,748 [WARNING] - begin_io: Long duration 677.506923676
2016-04-08 15:39:41,854 [WARNING] - begin_io: Long duration 770.184993744

The code for the multiprocessing version follows:

import pandas as pd
import numpy as np
from timeit import default_timer as timer
import datetime
import random
import multiprocessing
import time
import logging

def write_samples(store, samples, overwrite):
    wslog = logging.getLogger("write_samples")
    wslog.info("starting")
    frame = pd.DataFrame(samples, dtype='float64')
    if overwrite:
        store.put("df", frame, format='table', index=False)
    else:
        store.append("df", frame, format='table', index=False)
    wslog.info("finished")

def begin_io(pill2kill, maxduration=500):
    iolog = logging.getLogger("begin_io")
    iolog.info("starting")
    try:
        fname = "data/tab" + str(random.randint(0, 100)) + ".h5"
        iolog.debug("opening store %s", fname)
        with pd.HDFStore(fname, mode='w', complevel=0) as store:
            iolog.debug("store %s open", fname)
            counter = 0
            while not pill2kill.wait(0):
                data = np.random.rand(50000, 1)
                start_time = timer()
                write_samples(store, data, counter == 0)
                end_time = timer()
                duration = (end_time - start_time) * 1000
                iolog.debug( "IO Done      : %s (%.2f ms, %d)",
                            datetime.datetime.now(),
                            duration,
                            counter)
                if duration > maxduration:
                    iolog.warning("Long duration %s", duration)
                counter += 1
    except Exception:
        iolog.exception("oops")
    finally:
        iolog.info("finished")

def dummy_thread(pill2kill, maxduration=500):
    dtlog = logging.getLogger("dummy_thread")
    dtlog.info("starting")
    try:
        previous = timer()
        while not pill2kill.wait(0.01):
            now = timer()
            duration = (now - previous) * 1000
            dtlog.info( "Dummy Thread  : %s (%d ms)",
                       datetime.datetime.now(),
                       duration)
            if duration > maxduration:
                dtlog.warning("Long duration %s", duration)
            previous = now
        dtlog.debug("stopped looping.")
    except Exception:
        dtlog.exception("oops")
    finally:
        dtlog.info("finished")

if __name__ == '__main__':
    logformat = '%(asctime)-15s [%(levelname)s] - %(name)s: %(message)s'
    logging.basicConfig(format=logformat,
                        level=logging.WARNING)
    pill2kill = multiprocessing.Event()
    dp = multiprocessing.Process(target=dummy_thread, args=(pill2kill, 500,))
    dp.start()
    try:
        p = multiprocessing.Process(target=begin_io, args=(pill2kill, 500,))
        p.start()
        time.sleep(100)
    finally:
        pill2kill.set()
        dp.join()
        p.join()

Conclusion

Writing data into an HDF5 file really does block other threads, and the multiprocessing version is needed.

If you expect dummy_thread to do some real work (like collecting the data to store), and you want to send data from there to the HDF5 serializer, you will have to use some kind of messaging: either a multiprocessing.Queue, a Pipe, or possibly ZeroMQ (e.g. a PUSH-PULL socket pair). With ZeroMQ, you could even do the saving of data on another computer.
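
For the ZeroMQ route, a PUSH-PULL pair could look roughly like the sketch below. It assumes the pyzmq package and a hypothetical tcp://127.0.0.1:5555 endpoint; both ends are shown in one script for brevity, but in practice the PUSH side would live in the acquisition process and the PULL side in the writer process, possibly on another machine:

import numpy as np
import zmq

ctx = zmq.Context()

pull = ctx.socket(zmq.PULL)                # writer end: receives sample chunks
pull.bind("tcp://127.0.0.1:5555")

push = ctx.socket(zmq.PUSH)                # acquisition end: sends sample chunks
push.connect("tcp://127.0.0.1:5555")

push.send_pyobj(np.random.rand(50000, 1))  # pickles the array onto the wire
chunk = pull.recv_pyobj()                  # blocks until a chunk arrives
print(chunk.shape)                         # (50000, 1)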

EDIT/WARNING: Be aware that this code can sometimes fail to save the data; I used it to measure performance and did not make it watertight. When hitting Ctrl-C during processing, I sometimes get a corrupted file. I consider this problem out of scope for this question (and it should be solvable by stopping the running process carefully).
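
One conventional way to get that careful stop is sketched below, reusing the pill2kill event from the code above: the child ignores SIGINT, so Ctrl-C is handled only by the parent, which asks the writer to stop and waits for it, letting the HDFStore context manager close the file cleanly (protected_writer is a hypothetical stand-in for begin_io):

import multiprocessing
import signal

def protected_writer(pill2kill):
    # The child ignores Ctrl-C; only the parent reacts to it, so the write
    # loop is never interrupted in the middle of an HDF5 operation.
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    while not pill2kill.wait(0.01):
        pass                               # ... write_samples(...) would go here ...

if __name__ == '__main__':
    pill2kill = multiprocessing.Event()
    p = multiprocessing.Process(target=protected_writer, args=(pill2kill,))
    p.start()
    try:
        while p.is_alive():
            p.join(1)                      # Ctrl-C lands here, in the parent
    except KeyboardInterrupt:
        pill2kill.set()                    # request a clean stop
        p.join()                           # the store is closed before exit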

Latest update