Executing the code below throws Error: Cannot create a string longer than 0x3fffffe7 characters. My file contains just the two arrays and composed; the whole file is 1.3 GB. I want to combine them by ID: the code below maps over arrayMain and returns each of its objects combined with the objects in arrayItems that share the same ID, restructured so the ID is dropped. But I think I've hit a system limit. I'm fairly new to handling big data files, so any help is appreciated.
const composed = arrayMain.map((d) => {
  return {
    ...d,
    // collect all items with a matching ID, minus the ID itself
    data: arrayItems
      .filter(({ ID }) => d.ID === ID)
      .map(({ ID, ...needed }) => needed),
  };
});
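As an aside, this rescans all of arrayItems once for every element of arrayMain. A minimal sketch of an equivalent way to build composed with a Map keyed by ID (assuming the data shapes shown below), which makes only one pass over each array:

// Index arrayItems by ID once, dropping the ID from each stored item.
const itemsById = new Map();
for (const { ID, ...rest } of arrayItems) {
  if (!itemsById.has(ID)) itemsById.set(ID, []);
  itemsById.get(ID).push(rest);
}
// Single pass over arrayMain; IDs with no items get an empty data array.
const composed = arrayMain.map((d) => ({ ...d, data: itemsById.get(d.ID) ?? [] }));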
In case anyone wants to know how my data is structured:
const arrayMain = [
  {
    ID: 30574062,
    number: 28234702,
    place: "London",
  },
  {
    ID: 30574063,
    number: 45232502,
    place: "Paris",
  },
  ...
];
const arrayItems = [
  {
    "ID": 30574062,
    "anotherNumber": "52,3",
    "color": "red"
  },
  {
    "ID": 30574062,
    "anotherNumber": "13",
    "color": "yellow"
  },
  {
    "ID": 30574063,
    "anotherNumber": "60,6",
    "color": "blue"
  },
  ...
]
// expected result
[
  {
    ID: 30574062,
    number: 28234702,
    place: "London",
    data: [
      {
        "anotherNumber": "52,3",
        "color": "red"
      },
      {
        "anotherNumber": "13",
        "color": "yellow"
      }
    ]
  },
  {
    ID: 30574063,
    number: 45232502,
    place: "Paris",
    data: [
      {
        "anotherNumber": "60,6",
        "color": "blue"
      }
    ]
  },
  ...
]
There may be a more elegant solution, but you could stringify the array in chunks, generating the string for and appending to the file only one chunk at a time, so that at no point does a single giant string exist (0x3fffffe7 ≈ 2^30 characters is V8's maximum string length, which JSON.stringify of the whole 1.3 GB structure exceeds). After creating composed, open a write stream (which allows many sequential writes without error) and keep calling .write on it:
const fs = require('fs');

const stream = fs.createWriteStream(filePath, { flags: 'a' });
stream.write('[');
const CHUNK_LENGTH = 500; // alter as needed
for (let i = 0; i < composed.length; i += CHUNK_LENGTH) {
  // Stringify one chunk, stripping its surrounding [ ] so all chunks
  // concatenate into a single flat JSON array.
  const chunkStr = JSON.stringify(composed.slice(i, i + CHUNK_LENGTH)).slice(1, -1);
  // Separate chunks with a comma (none before the first chunk).
  stream.write((i > 0 ? ',' : '') + chunkStr);
}
stream.write(']');
// to wait for all of this to complete, watch for the stream's 'finish' event
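Note that 'finish' only fires after the stream has been ended; a minimal sketch of waiting for it (the logging is illustrative):

stream.end(); // signal that no more writes are coming; flushes buffered data

stream.on('finish', () => {
  console.log(`finished writing ${filePath}`); // safe to read the file now
});
stream.on('error', (err) => {
  console.error('write failed:', err);
});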