熊猫"pandas.errors.ParserError: Error tokenizing data. C error: Unknown error in IO callback"



I'm using pandas to read_csv a 3.8 GB pipe-delimited text file, but it errors out while reading the file into memory.

Here is the full error thrown from inside my read_in_files() function:

Error:

Reading in file C:\Users\cdabel\Desktop\_Temp\Master_Extract_Data_Mart_201909240935.txt
Traceback (most recent call last):
  File "<stdin>", line 10, in <module>
  File "<stdin>", line 7, in read_in_files
  File "c:\python36\lib\site-packages\pandas\io\parsers.py", line 685, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "c:\python36\lib\site-packages\pandas\io\parsers.py", line 463, in _read
    data = parser.read(nrows)
  File "c:\python36\lib\site-packages\pandas\io\parsers.py", line 1154, in read
    ret = self._engine.read(nrows)
  File "c:\python36\lib\site-packages\pandas\io\parsers.py", line 2048, in read
    data = self._reader.read(nrows)
  File "pandas\_libs\parsers.pyx", line 879, in pandas._libs.parsers.TextReader.read
  File "pandas\_libs\parsers.pyx", line 894, in pandas._libs.parsers.TextReader._read_low_memory
  File "pandas\_libs\parsers.pyx", line 948, in pandas._libs.parsers.TextReader._read_rows
  File "pandas\_libs\parsers.pyx", line 935, in pandas._libs.parsers.TextReader._tokenize_rows
  File "pandas\_libs\parsers.pyx", line 2130, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Unknown error in IO callback

What could be causing this error? Could it be memory-related? How do I troubleshoot it? Should I be chunking the data?

I don't suspect a RAM problem, since I have more than 7 GB of unused memory at the time the function is called, as shown in the Windows 10 Task Manager performance monitor. Also, I can't provide any sample of the underlying data because it is health and PII data.
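
To troubleshoot, one thing I'm considering (just a sketch; the 100,000-row figure is arbitrary) is a partial read, to see whether the parser gets through the start of the file at all:

import pandas as pd

filepath = r"C:\Temp\datafile.txt"

# Probe read: parse only the first 100,000 rows. If this succeeds, the failure
# is presumably further into the file, or related to cumulative memory use,
# rather than the delimiter or the header.
df_head = pd.read_csv(filepath, dtype=str, sep='|', nrows=100000)
print(df_head.shape)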

Here is an excerpt of my actual code:

import os
import pandas as pd
# File
filepath = r"C:\Temp\datafile.txt"
filename_w_ext = "datafile.txt"
# Read in TXT file
def read_in_files(filepath, filename_w_ext):
    filename, file_ext = os.path.splitext(filename_w_ext)
    print('Reading in file {}'.format(filepath))
    with open(filepath, "r", newline='') as file:
        global df_data
        # Here's where it errors:
        df_data = pd.read_csv(file, dtype=str, sep='|')
    return df_data.columns.values.tolist(), df_data.values.tolist()
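
If chunking is the answer, this is roughly what I have in mind (a sketch only; the 500,000-row chunk size is a guess on my part):

import pandas as pd

filepath = r"C:\Temp\datafile.txt"

# Read the pipe-delimited file in fixed-size row chunks instead of one call,
# then stitch the chunks together. chunksize is a row count, not bytes.
chunks = pd.read_csv(filepath, dtype=str, sep='|', chunksize=500000)
df_data = pd.concat(chunks, ignore_index=True)
print(df_data.shape)

Although, as I understand it, concatenating every chunk still ends up with the full DataFrame in memory, so this only really helps if I filter or aggregate each chunk instead of keeping everything.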

Googling this specific error only turns up the source code for the error handling in the pandas tokenizer code.

Lines 583-612:

static int parser_buffer_bytes(parser_t *self, size_t nbytes) {
    int status;
    size_t bytes_read;

    status = 0;
    self->datapos = 0;
    self->data = self->cb_io(self->source, nbytes, &bytes_read, &status);
    TRACE((
        "parser_buffer_bytes self->cb_io: nbytes=%zu, datalen: %d, status=%d\n",
        nbytes, bytes_read, status));
    self->datalen = bytes_read;

    if (status != REACHED_EOF && self->data == NULL) {
        int64_t bufsize = 200;
        self->error_msg = (char *)malloc(bufsize);

        if (status == CALLING_READ_FAILED) {
            snprintf(self->error_msg, bufsize,
                     "Calling read(nbytes) on source failed. "
                     "Try engine='python'.");
        } else {
            snprintf(self->error_msg, bufsize, "Unknown error in IO callback");
        }
        return -1;
    }

    TRACE(("datalen: %d\n", self->datalen));

    return status;
}
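
Even though my error hits the other branch, the CALLING_READ_FAILED branch above suggests trying the Python engine, so one fallback I could test (a sketch; I expect it to be much slower on a file this size) is:

import pandas as pd

filepath = r"C:\Temp\datafile.txt"

# Fall back to the pure-Python parser, which bypasses the C reader's IO
# callback path entirely, at the cost of speed.
df_data = pd.read_csv(filepath, dtype=str, sep='|', engine='python')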

After testing on a more powerful server, I now realize this error was apparently caused by the file needing 25 to 35 GB of available RAM for my 4 GB file with its 114 columns. That really should raise an out-of-memory error instead, but I suppose RAM usage climbs faster than the tokenizer code is able to check for an out-of-memory condition.
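
Given that, the workaround I'm leaning toward is to never hold the whole DataFrame at once and instead write each chunk straight back out (a sketch; the Parquet output paths and the 250,000-row chunk size are placeholders of mine, and to_parquet requires pyarrow or fastparquet to be installed):

import pandas as pd

filepath = r"C:\Temp\datafile.txt"

# Stream the file in row chunks and write each chunk to its own Parquet file,
# so only one chunk's worth of rows is ever resident in RAM.
reader = pd.read_csv(filepath, dtype=str, sep='|', chunksize=250000)
for i, chunk in enumerate(reader):
    chunk.to_parquet(r"C:\Temp\datafile_part_{:04d}.parquet".format(i), index=False)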
