How do I load a TensorFlow frozen graph model from a Google Cloud Storage bucket?



When we want to load a model with TensorFlow, we do this:

path_to_frozen = model_path + '/frozen_inference_graph.pb'
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.io.gfile.GFile(path_to_frozen, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

How can we load a model stored in a Google Storage bucket from a Google Cloud Function?

def download_blob(bucket_name, source_blob_name, destination_file_name):
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)

def handler(request):
    download_blob(BUCKET_NAME, 'redbull/output_inference_graph.pb/frozen_inference_graph.pb', '/tmp/frozen_inference_graph.pb')
    print("OK")
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.io.gfile.GFile('/tmp/frozen_inference_graph.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')

You can store the pb file in Cloud Storage.
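For completeness, here is a minimal sketch of uploading the frozen graph to the bucket once, for example from your workstation or a build step. The bucket name and object path are placeholders; adapt them to your project.

from google.cloud import storage

# Hypothetical one-time upload of the frozen graph to the bucket
client = storage.Client()
bucket = client.get_bucket('<bucket_name>')                 # placeholder
blob = bucket.blob('<path/to>/frozen_inference_graph.pb')   # placeholder
blob.upload_from_filename('frozen_inference_graph.pb')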

Then, in your function, download it to the local writable directory /tmp. Keep in mind that this directory lives in memory: the memory allocated to the function must be sized to cover both the application's own footprint and the downloaded model file.
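As a quick sanity check before sizing the function, you can read the model's size from the object metadata without downloading it. This is a small sketch using the same placeholder names as above.

from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket('<bucket_name>')                 # placeholder
blob = bucket.blob('<path/to>/frozen_inference_graph.pb')   # placeholder
blob.reload()  # fetch object metadata, including size in bytes
print(f"Model size: {blob.size / 1024 / 1024:.1f} MiB")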

Replace your first line with something like this:

# Be sure that your function's service account has access to the storage bucket
storage_client = storage.Client()
bucket = storage_client.get_bucket('<bucket_name>')
blob = bucket.blob('<path/to>/frozen_inference_graph.pb')
# Download locally your pb file
path_to_frozen = '/tmp/frozen_inference_graph.pb'
blob.download_to_filename(path_to_frozen)
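Putting it together, here is a minimal sketch of a Cloud Function that downloads the model once and caches the imported graph in a module-level global, so warm invocations of the same instance skip both the download and the graph import. The bucket name and object path are placeholders, and `handler` is a hypothetical entry point; `tf.GraphDef` matches the TF 1.x style used above.

import os

import tensorflow as tf
from google.cloud import storage

PATH_TO_FROZEN = '/tmp/frozen_inference_graph.pb'
detection_graph = None  # cached across warm invocations of the same instance

def get_detection_graph():
    global detection_graph
    if detection_graph is not None:
        return detection_graph
    # Download only if a previous invocation has not already done so
    if not os.path.exists(PATH_TO_FROZEN):
        storage_client = storage.Client()
        bucket = storage_client.get_bucket('<bucket_name>')         # placeholder
        blob = bucket.blob('<path/to>/frozen_inference_graph.pb')   # placeholder
        blob.download_to_filename(PATH_TO_FROZEN)
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()  # tf.compat.v1.GraphDef on TensorFlow 2.x
        with tf.io.gfile.GFile(PATH_TO_FROZEN, 'rb') as fid:
            od_graph_def.ParseFromString(fid.read())
        tf.import_graph_def(od_graph_def, name='')
    return detection_graph

def handler(request):  # hypothetical Cloud Function entry point
    graph = get_detection_graph()
    # ... run inference with the graph here ...
    return 'OK'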
