Why does Amazon S3 getObject().createReadStream() return no data, while getObject().promise() does?



I created a Lambda function that is invoked every time a file is added to my bucket. I want the Lambda function to read the newly added file from the bucket as a stream and save it to another bucket, so I wrote the following Node.js code using aws-sdk:

...
try {
  const file = s3
    .getObject({ Bucket: srcBucket, Key: srcKey })
    .createReadStream();
  console.log(file);
  const destParams = {
    Bucket: dstBucket,
    Key: dstKey,
    Body: file,
  };
  await s3.putObject(destParams).promise();
} catch (e) {
  console.log(e);
}

I tested the code by uploading a file to the bucket. After the function was invoked, I checked the logs and found this error:

INFO    Error: Cannot determine length of [object Object]

So I checked the console.log of the file variable and saw this object:

2021-04-02T08:52:40.642Z    3641187f-4d79-44e1-89f4-45c7e1967816    INFO    PassThrough {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 16384,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    pipes: [],
    flowing: null,
    ended: false,
    endEmitted: false,
    reading: false,
    sync: false,
    needReadable: false,
    emittedReadable: false,
    readableListening: false,
    resumeScheduled: false,
    errorEmitted: false,
    emitClose: true,
    autoDestroy: true,
    destroyed: false,
    errored: null,
    closed: false,
    closeEmitted: false,
    defaultEncoding: 'utf8',
    awaitDrainWriters: null,
    multiAwaitDrain: false,
    readingMore: false,
    decoder: null,
    encoding: null,
    [Symbol(kPaused)]: null
  },
  _events: [Object: null prototype] { prefinish: [Function: prefinish] },
  _eventsCount: 1,
  _maxListeners: undefined,
  _writableState: WritableState {
    objectMode: false,
    highWaterMark: 16384,
    finalCalled: false,
    needDrain: false,
    ending: false,
    ended: false,
    finished: false,
    destroyed: false,
    decodeStrings: true,
    defaultEncoding: 'utf8',
    length: 0,
    writing: false,
    corked: 0,
    sync: true,
    bufferProcessing: false,
    onwrite: [Function: bound onwrite],
    writecb: null,
    writelen: 0,
    afterWriteTickInfo: null,
    buffered: [],
    bufferedIndex: 0,
    allBuffers: true,
    allNoop: true,
    pendingcb: 0,
    prefinished: false,
    errorEmitted: false,
    emitClose: true,
    autoDestroy: true,
    errored: null,
    closed: false
  },
  allowHalfOpen: true,
  [Symbol(kCapture)]: false,
  [Symbol(kTransformState)]: {
    afterTransform: [Function: bound afterTransform],
    needTransform: false,
    transforming: false,
    writecb: null,
    writechunk: null,
    writeencoding: null
  }
}

Then, to debug, I tried doing it without a stream, so instead of createReadStream() I used .promise():

try {
  const file = await s3
    .getObject({ Bucket: srcBucket, Key: srcKey })
    .promise();
  console.log(file);
  const destParams = {
    Bucket: dstBucket,
    Key: dstKey,
    Body: file.Body,
  };
  await s3.putObject(destParams).promise();
} catch (e) {
  console.log(e);
}

This worked fine.

I don't know what's wrong with the createReadStream approach. Any ideas?

putObject needs to know the Content-Length of the body up front, but the PassThrough stream returned by createReadStream has no known length yet, hence the "Cannot determine length" error. Try s3.upload to upload the stream to S3 instead: it accepts streams of unknown length, handles chunking for you, and uses multipart upload in the background when needed.
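A minimal sketch of that approach, assuming `s3` is an instantiated `AWS.S3` v2 client; `streamCopy` and the bucket/key parameters are hypothetical names for illustration:

```javascript
// Hypothetical helper: pipe an object from one bucket to another as a stream.
// Assumes `s3` is an AWS.S3 client from aws-sdk v2.
async function streamCopy(s3, srcBucket, srcKey, dstBucket, dstKey) {
  // createReadStream gives a stream of unknown length...
  const body = s3
    .getObject({ Bucket: srcBucket, Key: srcKey })
    .createReadStream();

  // ...which s3.upload accepts (unlike putObject, which needs Content-Length).
  // upload() buffers in parts and switches to multipart upload for large bodies.
  return s3
    .upload({ Bucket: dstBucket, Key: dstKey, Body: body })
    .promise();
}
```

This streams the object through the Lambda instead of buffering the whole file in memory, which matters once objects are larger than the function's memory allocation.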

Also, if you are simply copying a file from S3 to S3, consider using S3 Replication or s3.copyObject.
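With copyObject the data never passes through the Lambda at all; S3 performs the copy server-side. A sketch, again assuming an `AWS.S3` v2 client; `copyWithinS3` and `buildCopySource` are hypothetical names, and note that `CopySource` is a "sourceBucket/sourceKey" string that should be URL-encoded:

```javascript
// Hypothetical helper: build the URL-encoded "bucket/key" CopySource string.
function buildCopySource(bucket, key) {
  return `${bucket}/${encodeURIComponent(key)}`;
}

// Hypothetical helper: server-side copy, no data flows through the Lambda.
async function copyWithinS3(s3, srcBucket, srcKey, dstBucket, dstKey) {
  return s3
    .copyObject({
      Bucket: dstBucket,
      Key: dstKey,
      CopySource: buildCopySource(srcBucket, srcKey),
    })
    .promise();
}
```

For the plain copy-on-upload use case in the question, this is both simpler and faster than streaming the bytes through the function.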
