tensorflow: Converting a BERT model to TFLite

gab6jxml, posted 2023-06-24 in Other

I have this code for a semantic search engine that uses a pre-trained BERT model. I want to convert the model to TFLite so I can deploy it with Google ML Kit, and I'd like to know how to do the conversion, or whether it is even possible. The official TensorFlow site suggests it should be, but I don't know where to start.
Code:

import scipy.spatial  # used below for cosine-distance search
from sentence_transformers import SentenceTransformer

# Load the BERT model. Various models trained on Natural Language Inference (NLI)
# https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/nli-models.md
# and Semantic Textual Similarity (STS) are available:
# https://github.com/UKPLab/sentence-transformers/blob/master/docs/pretrained-models/sts-models.md

model = SentenceTransformer('bert-base-nli-mean-tokens')

# A corpus is a list with documents split by sentences.

sentences = ['Absence of sanity', 
             'Lack of saneness',
             'A man is eating food.',
             'A man is eating a piece of bread.',
             'The girl is carrying a baby.',
             'A man is riding a horse.',
             'A woman is playing violin.',
             'Two men pushed carts through the woods.',
             'A man is riding a white horse on an enclosed ground.',
             'A monkey is playing drums.',
             'A cheetah is running behind its prey.']

# Each sentence is encoded as a 1-D vector with 768 dimensions
sentence_embeddings = model.encode(sentences)

print('Sample BERT embedding vector - length', len(sentence_embeddings[0]))

print('Sample BERT embedding vector - note includes negative values', sentence_embeddings[0])

#@title Semantic Search Form

# code adapted from https://github.com/UKPLab/sentence-transformers/blob/master/examples/application_semantic_search.py

query = 'Nobody has sane thoughts' #@param {type: 'string'}

queries = [query]
query_embeddings = model.encode(queries)

# Find the closest 3 sentences of the corpus for each query sentence based on cosine similarity
number_top_matches = 3 #@param {type: "number"}

print("Semantic Search Results")

for query, query_embedding in zip(queries, query_embeddings):
    distances = scipy.spatial.distance.cdist([query_embedding], sentence_embeddings, "cosine")[0]

    results = zip(range(len(distances)), distances)
    results = sorted(results, key=lambda x: x[1])

    print("\n\n======================\n\n")
    print("Query:", query)
    print("\nTop 5 most similar sentences in corpus:")

    for idx, distance in results[0:number_top_matches]:
        print(sentences[idx].strip(), "(Cosine Score: %.4f)" % (1-distance))

4jb9z9bj #1

First, you need your model in TensorFlow; the package you are using is written in PyTorch. Hugging Face Transformers ships TensorFlow models you can start from, and they also provide TFLite-ready models for Android.
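As a sketch, loading the TensorFlow variant of a BERT checkpoint with Transformers could look like this (bert-base-uncased is an illustrative choice, not the exact NLI model from the question):

from transformers import TFAutoModel

# Load the TensorFlow weights of a BERT checkpoint from the Hugging Face hub
pretrained_model = TFAutoModel.from_pretrained("bert-base-uncased")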
In general, you start from a TensorFlow model and save it as a SavedModel:

tf.saved_model.save(pretrained_model, "/tmp/pretrained-bert/1/")

Then you can run the converter on it.
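A minimal conversion sketch on top of that SavedModel (enabling Select TF ops is an assumption, but BERT graphs often use ops outside the TFLite builtin set):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/pretrained-bert/1/")
# Fall back to Select TF ops for anything the builtin TFLite op set lacks
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("bert.tflite", "wb") as f:
    f.write(tflite_model)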


3pmvbmvn #2

Have you tried running the conversion tool (tflite_convert)? What does it complain about?
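For reference, a typical invocation against a SavedModel directory looks like this (the paths are placeholders):

tflite_convert --saved_model_dir=/tmp/pretrained-bert/1/ --output_file=bert.tflite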
By the way, you may want to look at the TFLite team's QA example that uses a BERT model: https://github.com/tensorflow/examples/tree/master/lite/examples/bert_qa/android


kx5bkwkv #3

I couldn't find anything about using a BERT model on mobile to compute document embeddings and run a k-nearest-document search, as in your example. It is probably not a great idea anyway: BERT models are expensive to run and have a huge number of parameters, so the model file is also large (400 MB+).
However, you can now use BERT and MobileBERT for text classification and question answering on mobile. You could start from their demo app, which interfaces with a MobileBERT TFLite model, as Xunkai mentioned. I believe better support for your use case is coming in the near future.
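One detail to keep in mind if you try anyway: 'bert-base-nli-mean-tokens' mean-pools the per-token vectors, so a raw BERT model only gives you token embeddings and you would have to reproduce the pooling yourself on device. A minimal NumPy sketch of that step (the shapes are assumptions for bert-base):

import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    # last_hidden_state: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    # Average only the real-token vectors into one sentence vector
    mask = attention_mask[..., np.newaxis].astype(np.float32)
    summed = (last_hidden_state * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts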


tsm1rwdh #4

Consider using the official exporter for PyTorch models, which targets ONNX or TFLite: https://huggingface.co/docs/optimum/exporters/tflite/usage_guides/export_a_model

optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/

Replace bert-base-uncased with the model of your choice from the Hugging Face hub.
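To sanity-check the export, you can load it with the TFLite interpreter and run a dummy batch through it; a quick sketch (the file name model.tflite is an assumption about the exporter's output layout):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
interpreter.allocate_tensors()

# Feed zeros of the right shape and dtype into every input tensor
for detail in interpreter.get_input_details():
    dummy = np.zeros(detail["shape"], dtype=detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
print("Output shape:", output.shape)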
