I have a trained (and working) model from the Model Garden object detection tutorial — https://www.tensorflow.org/tfmodels/vision/object_detection
I want to convert it to TFLite using export_tflite_lib.convert_tflite_model, as described in the tutorial. The problem is that the resulting TFLite model has an input shape of [1, 1, 1, 3], so I can't use it. I expected a shape of [1, None, None, 3] or [1, 256, 256, 3]. Is there any way around this?
Python environment used:
pip install tensorflow==2.13
pip install imagesize opencv-python chardet
pip install tf-models-official # the tf-vision libs
Python code to reproduce the error. To use it you need the exported model, which you can download here: https://file.io/ex4yxjrRhT30
import tensorflow as tf
import tensorflow_models as tfm
from official.vision.serving import export_saved_model_lib, export_tflite_lib
import PIL.Image
import numpy as np
import os, time, json, imagesize, sys
print("TF", tf.__version__) # 2.13
MODEL_WIDTH = 256
MODEL_HEIGHT = 256
BATCH_SIZE = 1
# the trained model export folder
EXPORT_DIR = "/home/. .. . . . ./202309_tfvision/export"
# ------------------------------------------------------------------------------------
# https://www.tensorflow.org/api_docs/python/tfm/core/base_trainer/ExperimentConfig
exp_config = tfm.core.exp_factory.get_exp_config('retinanet_mobile_coco')
logical_device_names = [logical_device.name for logical_device in tf.config.list_logical_devices()]
# Backbone config.
exp_config.task.freeze_backbone = False
exp_config.task.annotation_file = ''
# Model config.
# default is 384,384?
# exp_config.task.model is a https://www.tensorflow.org/api_docs/python/tfm/hyperparams/Config
# whose docs mention no "input_size" attribute...
exp_config.task.model.input_size = [MODEL_WIDTH, MODEL_HEIGHT, 3]
exp_config.task.model.num_classes = 3 #num_classes + 1 ?
exp_config.task.model.detection_generator.tflite_post_processing.max_classes_per_detection = exp_config.task.model.num_classes
train_steps = 1 # testing
# Adjust the trainer configuration.
if 'GPU' in ''.join(logical_device_names):
  print('This may be broken in Colab.')
  device = 'GPU'
elif 'TPU' in ''.join(logical_device_names):
  print('This may be broken in Colab.')
  device = 'TPU'
else:
  print('Running on CPU is slow, so only train for a few steps.')
  device = 'CPU'
exp_config.trainer.steps_per_loop = 10 # steps_per_loop = num_of_training_examples // train_batch_size
exp_config.trainer.summary_interval = 100
exp_config.trainer.checkpoint_interval = 4
exp_config.trainer.validation_interval = 8
exp_config.trainer.validation_steps = 1 # validation_steps = num_of_validation_examples // eval_batch_size
exp_config.trainer.train_steps = train_steps
exp_config.trainer.optimizer_config.warmup.linear.warmup_steps = 100
exp_config.trainer.optimizer_config.learning_rate.type = 'cosine'
exp_config.trainer.optimizer_config.learning_rate.cosine.decay_steps = train_steps
exp_config.trainer.optimizer_config.learning_rate.cosine.initial_learning_rate = 0.1
exp_config.trainer.optimizer_config.warmup.linear.warmup_learning_rate = 0.05
print(exp_config.as_dict())
# ------------------------------------------------------------------------------------
print("TFLITE")
tflite_model = export_tflite_lib.convert_tflite_model(
saved_model_dir=EXPORT_DIR,
params=exp_config,
    # quant_type: The post training quantization (PTQ) method. It can be one of
    # `default` (dynamic range), `fp16` (float16), `int8` (integer with float
    # fallback), `int8_full` (integer only) and None (no quantization).
quant_type=None)
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# input_details [{'name': 'serving_default_inputs:0', 'index': 0,
#   'shape': array([1, 1, 1, 3], dtype=int32),
#   'shape_signature': array([ 1, -1, -1, 3], dtype=int32),
#   'dtype': <class 'numpy.uint8'>, 'quantization': (0.0, 0),
#   'quantization_parameters': {'scales': array([], dtype=float32),
#     'zero_points': array([], dtype=int32), 'quantized_dimension': 0},
#   'sparsity_parameters': {}}]
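# Note: for a model with dynamic inputs, 'shape' reports 1 for each unknown
# dimension, while 'shape_signature' keeps the real signature (-1 = dynamic).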
print("input_details", input_details)
# TEST TFLITE
TEST_IMAGE_FILE = "/home/daniel/armazem/porthus/ml_data/camera/20230731_233104/poco/000165.jpg"
image = np.array(PIL.Image.open(TEST_IMAGE_FILE).resize((MODEL_WIDTH, MODEL_HEIGHT)), dtype=input_details[0]["dtype"])
# create a batch with 1 image
images = np.expand_dims(image, axis=0)
# ERROR
# ValueError: Cannot set tensor: Dimension mismatch. Got 256 but expected 1 for dimension 1 of input 0.
interpreter.set_tensor(input_details[0]["index"], images)
interpreter.invoke()
1 Answer
Found the answer here: Input images with dynamic dimensions in Tensorflow-lite
You just need to add one line before calling interpreter.allocate_tensors().
With that, the input_details shape becomes [1, 256, 256, 3].
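For completeness, a minimal sketch of what that missing line presumably is. Judging from the linked question, the fix is interpreter.resize_tensor_input (an existing tf.lite.Interpreter method), which pins the dynamic (-1) dimensions to a concrete size before allocation; BATCH_SIZE, MODEL_HEIGHT and MODEL_WIDTH are the constants from the script above:

interpreter = tf.lite.Interpreter(model_content=tflite_model)
# Pin the dynamic input (shape_signature [1, -1, -1, 3]) to a fixed
# [1, 256, 256, 3] BEFORE allocate_tensors(); otherwise each dynamic
# dimension defaults to 1, giving the [1, 1, 1, 3] shape seen above.
interpreter.resize_tensor_input(
    interpreter.get_input_details()[0]["index"],
    [BATCH_SIZE, MODEL_HEIGHT, MODEL_WIDTH, 3])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
print("input_details shape", input_details[0]["shape"])  # [1 256 256 3]
# set_tensor / invoke now accept the full-size image batch:
interpreter.set_tensor(input_details[0]["index"], images)
interpreter.invoke()

Note that resize_tensor_input must run before allocate_tensors(); if you resize later, call allocate_tensors() again before set_tensor will accept the new shape.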