How to convert RGBA_8888 to a ByteBuffer to feed it to a TF Lite model



I am using the CameraX ImageAnalysis use case to run a TF Lite model, and the frames I receive are in the RGBA_8888 output format. How do I convert them to a ByteBuffer so I can feed them to my ML model?
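For reference, here is a minimal sketch of copying the RGBA_8888 plane of the ImageProxy into a Bitmap. It assumes the analyzer was configured with setOutputImageFormat(ImageAnalysis.OUTPUT_IMAGE_FORMAT_RGBA_8888); the helper name imageProxyToBitmap and the row-padding handling are illustrative, not part of my actual code:

import android.graphics.Bitmap
import androidx.camera.core.ImageProxy

// Sketch: copy the single RGBA_8888 plane into an ARGB_8888 Bitmap.
// ARGB_8888 bitmaps are stored as RGBA bytes in memory, so the buffer
// can be copied directly with copyPixelsFromBuffer.
fun imageProxyToBitmap(imageProxy: ImageProxy): Bitmap {
    val plane = imageProxy.planes[0]
    val buffer = plane.buffer
    val pixelStride = plane.pixelStride
    val rowStride = plane.rowStride
    // The row stride may be wider than width * pixelStride, so allocate a
    // slightly wider bitmap and let copyPixelsFromBuffer consume the padding.
    val rowPadding = rowStride - pixelStride * imageProxy.width
    val bitmap = Bitmap.createBitmap(
        imageProxy.width + rowPadding / pixelStride,
        imageProxy.height,
        Bitmap.Config.ARGB_8888
    )
    bitmap.copyPixelsFromBuffer(buffer)
    return bitmap
}

The resulting Bitmap can then be scaled and fed into the per-pixel loop or a TensorImage as below.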

Here is the code Android Studio generated for the ML model:


// Creates inputs for reference.
val inputFeature0 = TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
inputFeature0.loadBuffer(byteBuffer)
// Runs model inference and gets result.
val outputs = model.process(inputFeature0)
val outputFeature0 = outputs.outputFeature0AsTensorBuffer
// Releases model resources if no longer used.
model.close()

Here is the code I wrote to convert RGBA_8888 to a ByteBuffer, but it always gives the same output data (confidences):

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this)) { imageProxy ->
    val bitmap = Bitmap.createBitmap(
        imageProxy.width,
        imageProxy.height,
        Bitmap.Config.ARGB_8888
    )
    val img = Bitmap.createScaledBitmap(bitmap, 224, 224, false)
    val model = ModelFull.newInstance(context)
    val byteBuffer: ByteBuffer = ByteBuffer.allocate(4 * 224 * 224 * 3)
    byteBuffer.order(ByteOrder.nativeOrder())
    // get 1D array of 224 * 224 pixels in image
    val intValues = IntArray(224 * 224)
    img.getPixels(intValues, 0, img.width, 0, 0, img.width, img.height)
    // iterate over pixels and extract R, G, and B values. Add to bytebuffer.
    var pixel = 0
    for (i in 0 until 224) {
        for (j in 0 until 224) {
            val `val` = intValues[pixel++] // RGB
            byteBuffer.putFloat((`val` shr 16 and 0xFF) * (1f / 255f))
            byteBuffer.putFloat((`val` shr 8 and 0xFF) * (1f / 255f))
            byteBuffer.putFloat((`val` and 0xFF) * (1f / 255f))
        }
    }
    val inputFeature0 =
        TensorBuffer.createFixedSize(intArrayOf(1, 224, 224, 3), DataType.FLOAT32)
    inputFeature0.loadBuffer(byteBuffer)
    // Runs model inference and gets result.
    val outputs = model.process(inputFeature0)
    val outputFeature0 = outputs.outputFeature0AsTensorBuffer
    val confidences = outputFeature0.floatArray
    Log.d("this is my array", "arr: " + Arrays.toString(confidences))
    // Releases model resources if no longer used.
    model.close()
    imageProxy.close()
}

Try this, it worked for me.

TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 400, 600, 3}, DataType.FLOAT32);
Bitmap input = Bitmap.createScaledBitmap(bitmap, 400, 600, true);
TensorImage image = new TensorImage(DataType.FLOAT32);
image.load(input);
ByteBuffer byteBuffer = image.getBuffer();
inputFeature0.loadBuffer(byteBuffer);
Seeinthedark.Outputs outputs = model.process(inputFeature0);
TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
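Note that loading a Bitmap into a FLOAT32 TensorImage keeps the pixel values in the 0–255 range. If the model expects inputs normalized to [0, 1], an ImageProcessor from the TF Lite Support library can resize and normalize the image before grabbing the buffer. A rough sketch in Kotlin (the 224×224 size and the 0/255 normalization parameters are assumptions; adjust them to your model, and `bitmap`/`inputFeature0` come from the snippets above):

import org.tensorflow.lite.DataType
import org.tensorflow.lite.support.common.ops.NormalizeOp
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp

// Sketch: resize and normalize the bitmap before handing its buffer
// to the generated model wrapper.
val imageProcessor = ImageProcessor.Builder()
    .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
    .add(NormalizeOp(0f, 255f)) // maps 0..255 to 0..1; use the model's own mean/std if different
    .build()

var tensorImage = TensorImage(DataType.FLOAT32)
tensorImage.load(bitmap)                        // bitmap obtained from the camera frame
tensorImage = imageProcessor.process(tensorImage)
inputFeature0.loadBuffer(tensorImage.buffer)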
