Is there a workflow example for going from TensorFlow to OpenMV?



I have trained a MobileNet-V2-based multi-class image classification model (only dense layers were added on top), applied full-integer (INT8) quantization, and exported the model.tflite file, which I call with tf.classify().
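For reference, here is a minimal sketch of that kind of architecture: a frozen MobileNet-V2 backbone with a small dense head on 96x96 RGB input. The 128-unit layer and the 4-class output are illustrative values only, not the actual ones.

import tensorflow as tf

# Frozen MobileNet-V2 backbone with a small dense classification head.
# The 128-unit layer and the 4 output classes are placeholder values.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet')
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])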

Here is the code I used to quantize it:

import tensorflow as tf
import numpy as np
import pathlib

def representative_dataset():
    for _ in range(100):
        data = np.random.rand(1, 96, 96, 3)  # random tensor for test
        yield [data.astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/my_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_quant_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)

The model had good accuracy in testing during training. However, when tested on the OpenMV, every object produces the same label (although with slightly different probabilities).

I looked through some references; one of them mentioned that tf.classify() has offset and scale parameters, which relate to how the RGB values are squashed into [-1, 1] or [0, 1] during training, but the official API documentation does not list such parameters (a short illustration of that scale/offset idea follows the snippet below).

for obj in tf.classify(self.net, img1, min_scale=1.0, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
    print("**********\nTop 1 Detections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
    sorted_list = sorted(zip(self.labels, obj.output()), key=lambda x: x[1], reverse=True)
    for i in range(1):
        print("%s = %f" % (sorted_list[i][0], sorted_list[i][1]))
    return sorted_list[i][0]
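For illustration, the two normalizations mentioned above are just a fixed scale and offset applied to the raw pixel values. The mappings below are the standard conventions and are given here as an assumption, not as parameters of the OpenMV API:

import numpy as np

# Standard pixel normalizations, written out as scale/offset arithmetic.
pixels = np.arange(0, 256, dtype=np.float32)
zero_to_one = pixels / 255.0             # [0, 255] -> [0, 1]
minus_one_to_one = pixels / 127.5 - 1.0  # [0, 255] -> [-1, 1]
# Whichever mapping was used at training time has to be reproduced at
# inference (or baked into the model graph, e.g. with a
# tf.keras.layers.Rescaling layer) before the model sees camera pixels.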

So, is there an example of the workflow for going from training a model in TensorFlow to deploying it on the OpenMV?

Well, you asked this about a year ago, but I'm here to help anyway!

If you want to train an image classification model for the OpenMV H7 Plus, you can take a look at this repo.
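For the export step, here is a minimal sketch of a full-integer conversion calibrated on real images instead of random noise. The calibration_images.npy file, the saved-model path, and the uint8 input/output choice are assumptions made for illustration:

import numpy as np
import tensorflow as tf

# Placeholder: real 96x96x3 training images in the same value range the
# model was trained on, used to calibrate the quantization ranges.
calibration_images = np.load('calibration_images.npy')

def representative_dataset():
    for image in calibration_images[:100]:
        yield [np.expand_dims(image.astype(np.float32), axis=0)]

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/my_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force a fully integer model, including the input and output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open('trained.tflite', 'wb') as f:
    f.write(tflite_model)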

You can also run inference with this code:

import sensor, image, time, os, tf, uos, gc
sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.
net = None
labels = None
try:
    # load the model, alloc the model file on the heap if we have at least 64K free after loading
    net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    print(e)
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

try:
    labels = [line.rstrip('\n') for line in open("labels.txt")]
except Exception as e:
    raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()
    # default settings just do one detection... change them to search the image...
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))
        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))
    print(clock.fps(), "fps")

I hope it helps!
