NodeJS - Reading a buffer line by line synchronously => toString() failed



I have been struggling with this and searching for a long time. I know there are answers for this, but none of them worked.

For this I used fs.createReadStream and readline. But readline closes the file read with fs.close(), so it simply does not work when used on a buffer: the reading of the whole file keeps going and it is impossible to interrupt it...
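For reference, the readline attempt looked roughly like this (a minimal sketch; 'path/to/file' stands in for the actual source). Note that rl.close() does not immediately stop 'line' events that readline has already queued, which matches the behaviour described above:

const fs = require('fs');
const readline = require('readline');

const rl = readline.createInterface({
    input: fs.createReadStream('path/to/file'),
    crlfDelay: Infinity
});

rl.on('line', (line) => {
    console.log('line content = ' + line);
    // closing here does not interrupt lines readline has already buffered
    rl.close();
});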

Then I used this:

const stream = require('stream');
const split = require('split');
const es = require('event-stream');

let bufferStream = new stream.PassThrough();
bufferStream.end(hexaviaFile.buffer);
bufferStream
    .pipe(split())
    .pipe(es.mapSync(function (line) {
        // pause the readstream
        bufferStream.pause();
        // DO WHATEVER WITH YOUR LINE
        console.log('line content = ' + line);
        // resume the readstream, possibly from a callback
        bufferStream.resume();
    }).on('error', function (err) {
        console.log('Error while reading file.' + err);
    }).on('end', function () {
        console.log('end event !');
    }).on('close', function () {
        console.log('close event !');
    }));
// toString() Failed

I got the [toString() failed] error and searched for it; apparently it appears when the buffer is larger than the maximum Node buffer size.

So I checked:

const buffer = require('buffer');
console.log('buffer.kMaxLength = ', buffer.kMaxLength); // 2147483647
console.log('hexaviaFile.buffer.byteLength = ', hexaviaFile.buffer.byteLength); // => 413567671

That is not the case here, as you can see from the numbers given:
* max buffer size = 2 GB
* my buffer = 0.4 GB
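In hindsight, kMaxLength is not the only limit involved: Node also exposes the maximum length of a single string, which on many V8 builds is far below 2 GB, so a 0.4 GB buffer can still fail toString(). A quick check (assuming a Node version recent enough to provide buffer.constants):

const buffer = require('buffer');
// Largest string V8 can create; commonly in the hundreds of MB range,
// well below kMaxLength
console.log('MAX_STRING_LENGTH = ', buffer.constants.MAX_STRING_LENGTH);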

I also tried a few different libraries to do this, but:
1. I want to keep memory usage as low as possible
2. I need this read to be perfectly synchronous. In other words, I have some processing to run after the file is read, and all the reading needs to be finished before moving on to the next step.

I have no idea what to do :) any kind of help is appreciated.

Regards.

I forgot about this post. I found a way to achieve this without errors.

It is given here: https://github.com/request/request/issues/2826

First, create a splitter to read string chunks:

const { Transform } = require('stream');

class Splitter extends Transform {
    constructor(options) {
        super(options);
        this.splitSize = options.splitSize;
        this.buffer = Buffer.alloc(0);
        this.continueThis = true;
    }

    stopIt() {
        this.continueThis = false;
    }

    _transform(chunk, encoding, cb) {
        this.buffer = Buffer.concat([this.buffer, chunk]);
        while ((this.buffer.length > this.splitSize || this.buffer.length === 1) && this.continueThis) {
            try {
                let piece = this.buffer.slice(0, this.splitSize);
                this.push(piece);
                this.buffer = this.buffer.slice(this.splitSize);
                if (this.buffer[0] === 26) { // 0x1A marks EOF in my files
                    console.log('EOF : ' + this.buffer[0]);
                }
            } catch (err) {
                console.log('ERR OCCURED => ', err);
                break;
            }
        }
        console.log('WHILE FINISHED');
        cb();
    }
}

Then pipe it to your stream:

let bufferStream = new stream.PassThrough();
bufferStream.end(hugeBuffer);
let splitter = new Splitter({ splitSize: 170 }); // In my case I have 170-byte lines, so I want to process them line by line
let lineNr = 0;
bufferStream
    .pipe(splitter)
    .on('data', async function (line) {
        line = line.toString().trim();
        splitter.pause(); // pause the stream so you can perform long processing with await
        lineNr++;
        if (lineNr === 1) {
            // DO stuff with 1st line
        } else {
            splitter.stopIt(); // Break the stream and stop reading so we just read the 1st line
        }
        splitter.resume(); // resume the stream so you can process the next chunk
    }).on('error', function (err) {
        console.log('Error while reading file.' + err);
        // whatever
    }).on('end', async function () {
        console.log('end event');
        // Stream has ended, do whatever...
    });
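One caveat I would add (my own note, not from the linked issue): nothing is awaited between pause() and resume() above, so the stream resumes immediately. If each line needs real asynchronous work, await it before resuming, along these lines (processLine is a hypothetical placeholder, and splitter is the instance from the snippet above):

// hypothetical async handler standing in for the per-line processing
async function processLine(line) {
    // ... long-running work ...
}

splitter.on('data', async function (chunk) {
    splitter.pause(); // no further 'data' events while we work
    await processLine(chunk.toString().trim());
    splitter.resume(); // resume only once the work has finished
});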
