WASM backend tensorflowjs throws "Unhandled Rejection (RuntimeError): index out of bounds" error in Reactjs



I am trying to set up the WASM backend for the blazeface face detection model in a React app. The vanilla JS demo runs for hours without any errors, but in React it throws an "Unhandled Rejection (RuntimeError): index out of bounds" error after the camera has been open for about 3-5 minutes.

The whole app crashes because of this error. Judging from the error log below, it may be related to the disposeData() and disposeTensor() functions, which I guess are involved in garbage collection. But I don't know whether this is a bug in the WASM library itself. Any idea why this happens?

I have also included my render-prediction function below.

renderPrediction = async () => {
  const model = await blazeface.load({ maxFaces: 1, scoreThreshold: 0.95 });
  if (this.play) {
    const canvas = this.refCanvas.current;
    const ctx = canvas.getContext("2d");
    const returnTensors = false;
    const flipHorizontal = true;
    const annotateBoxes = true;
    const predictions = await model.estimateFaces(
      this.refVideo.current,
      returnTensors,
      flipHorizontal,
      annotateBoxes
    );
    if (predictions.length > 0) {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      for (let i = 0; i < predictions.length; i++) {
        if (returnTensors) {
          // when tensors are returned, read them back as plain arrays
          predictions[i].topLeft = predictions[i].topLeft.arraySync();
          predictions[i].bottomRight = predictions[i].bottomRight.arraySync();
          if (annotateBoxes) {
            predictions[i].landmarks = predictions[i].landmarks.arraySync();
          }
        }
        const start = predictions[i].topLeft;
        const end = predictions[i].bottomRight;
        const size = [end[0] - start[0], end[1] - start[1]];
        // draw the bounding box from start/size
        ctx.strokeStyle = "red";
        ctx.strokeRect(start[0], start[1], size[0], size[1]);

        if (annotateBoxes) {
          const landmarks = predictions[i].landmarks;
          ctx.fillStyle = "blue";
          for (let j = 0; j < landmarks.length; j++) {
            const x = landmarks[j][0];
            const y = landmarks[j][1];
            ctx.fillRect(x, y, 5, 5);
          }
        }
      }
    }
    requestAnimationFrame(this.renderPrediction);
  }
};

Full log of the error:

Unhandled Rejection (RuntimeError): index out of bounds
(anonymous function)
unknown
./node_modules/@tensorflow/tfjs-backend-wasm/dist/tf-backend-wasm.esm.js/</tt</r</r._dispose_data
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/wasm-out/tfjs-backend-wasm.js:9

disposeData
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/backend_wasm.ts:115
112 | 
113 | disposeData(dataId: DataId) {
114 |   const data = this.dataIdMap.get(dataId);
> 115 |   this.wasm._free(data.memoryOffset);
| ^  116 |   this.wasm.tfjs.disposeData(data.id);
117 |   this.dataIdMap.delete(dataId);
118 | }
disposeTensor
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:838
835 |     'tensors');
836 | let res;
837 | const inputMap = {};
> 838 | inputs.forEach((input, i) => {
| ^  839 |     inputMap[i] = input;
840 | });
841 | return this.runKernelFunc((_, save) => {
dispose
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/tensor.ts:388
endScope
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:983
tidy/<
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431
428 | if (kernel != null) {
429 |     kernelFunc = () => {
430 |         const numDataIdsBefore = this.backend.numDataIds();
> 431 |         out = kernel.kernelFunc({ inputs, attrs, backend: this.backend });
| ^  432 |         const outInfos = Array.isArray(out) ? out : [out];
433 |         if (this.shouldCheckForMemLeaks()) {
434 |             this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos);
scopedRun
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:448
445 | // inputsToSave and outputsToSave. Currently this is the set of ops
446 | // with kernel support in the WASM backend. Once those ops and
447 | // respective gradients are modularised we can remove this path.
> 448 | if (outputsToSave == null) {
| ^  449 |     outputsToSave = [];
450 | }
451 | const outsToSave = outTensors.filter((_, i) => outputsToSave[i]);
tidy
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/engine.ts:431
428 | if (kernel != null) {
429 |     kernelFunc = () => {
430 |         const numDataIdsBefore = this.backend.numDataIds();
> 431 |         out = kernel.kernelFunc({ inputs, attrs, backend: this.backend });
| ^  432 |         const outInfos = Array.isArray(out) ? out : [out];
433 |         if (this.shouldCheckForMemLeaks()) {
434 |             this.checkKernelForMemLeak(kernelName, numDataIdsBefore, outInfos);
tidy
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/src/globals.ts:190
187 |     const tensors = getTensorsInContainer(container);
188 |     tensors.forEach(tensor => tensor.dispose());
189 | }
> 190 | /**
191 |  * Keeps a `tf.Tensor` generated inside a `tf.tidy` from being disposed
192 |  * automatically.
193 |  */
estimateFaces
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/node_modules/@tensorflow-models/blazeface/dist/blazeface.esm.js:17
Camera/this.renderPrediction
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:148
145 | const returnTensors = false;
146 | const flipHorizontal = true;
147 | const annotateBoxes = true;
> 148 | const predictions = await model.estimateFaces(
| ^  149 |   this.refVideo.current,
150 |   returnTensors,
151 |   flipHorizontal,
async*Camera/this.renderPrediction
C:/Users/osman.cakir/Documents/osmancakirio/deepLearning/blazeface_reactjs/src/Camera.js:399
396 |         // }
397 |       }
398 |     }
> 399 |     requestAnimationFrame(this.renderPrediction);
| ^  400 |   }
401 | };
402 | 

After making predictions with tensors, you need to release the tensors from device memory, otherwise they pile up and lead to errors like this one. This can be done manually with tf.dispose(), which lets you specify exactly which tensors to dispose. You can call it right after the prediction on the tensor has been made:

const predictions = await model.estimateFaces(
  this.refVideo.current,
  returnTensors,
  flipHorizontal,
  annotateBoxes
);

tf.dispose(this.refVideo.current);
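
Note that tf.dispose() only frees actual tensors. If returnTensors were set to true, the prediction fields themselves would come back as tensors and would also need to be freed once their values have been read with arraySync(). A minimal sketch of that cleanup, assuming the field names blazeface returns in that mode:

// Hypothetical cleanup for returnTensors === true: topLeft, bottomRight
// and landmarks are tensors here and must be freed after reading them.
predictions.forEach((p) => {
  tf.dispose([p.topLeft, p.bottomRight, p.landmarks]);
});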

You can also use tf.tidy(), which does this disposal automatically for you. With it you can wrap the function where the image tensor is processed for the prediction. This issue on github addresses it well, but I am not sure about the implementation here, because tf.tidy() only works with synchronous function calls.
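
As a rough sketch of how that split could look (the tf.browser.fromPixels preprocessing step is an assumption; estimateFaces also accepts a tensor as input): keep the synchronous tensor work inside tf.tidy() and dispose the returned tensor yourself once the awaited call has finished.

// Only synchronous work may run inside tf.tidy(); the awaited
// estimateFaces call stays outside, so its input tensor is freed manually.
const input = tf.tidy(() => tf.browser.fromPixels(this.refVideo.current));
const predictions = await model.estimateFaces(
  input,
  returnTensors,
  flipHorizontal,
  annotateBoxes
);
input.dispose(); // free the frame tensor once the prediction is done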

Or you can wrap the code that handles the image tensors in the calls below, which will also clean up any unused tensors:

tf.engine().startScope()
// handling image tensors function
tf.engine().endScope()
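
Applied to the renderPrediction function above, that could look roughly like the sketch below; the scope just has to open before the tensors are created and close after the results have been drawn.

renderPrediction = async () => {
  tf.engine().startScope();
  const predictions = await model.estimateFaces(
    this.refVideo.current,
    returnTensors,
    flipHorizontal,
    annotateBoxes
  );
  // ... draw bounding boxes and landmarks as before ...
  tf.engine().endScope(); // disposes tensors created while the scope was open
  requestAnimationFrame(this.renderPrediction);
};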
