How to run an ML model on a live server with Django

Asked by zzlelutf on 2022-11-26

I have a Django project that uses a public ML model ("deepset/roberta-base-squad2") to make predictions. The server receives a request whose parameters trigger a queued function, and that function runs the prediction. This only works locally, though. Once I push the project to a live server, the model starts running but never finishes. I have tried different guides for setting up the project so that the ML model is not downloaded every time a request is made, but that did not solve the problem. I don't know what else to try. I can provide any additional information if needed.
Here is my current setup:

views.py

from rest_framework import status
from rest_framework.generics import GenericAPIView
from rest_framework.response import Response

from .serializers import BotSerializer


class BotView(GenericAPIView):
    serializer_class = BotSerializer

    def post(self, request, *args, **kwargs):
        try:
            serializer = self.serializer_class(data=request.data)
            serializer.is_valid(raise_exception=True)
            serializer.save()
            print(serializer.data)
            return Response(data=serializer.data, status=status.HTTP_200_OK)
        except Exception as e:
            print(str(e))
            return Response(data=str(e), status=status.HTTP_400_BAD_REQUEST)

serializers.py

from rest_framework import serializers

from .tasks import upload_to_ai

class BotSerializer(serializers.Serializer):
    questions = serializers.ListField(required=True, write_only=True)
    user_info = serializers.CharField(required=True, write_only=True)
    merchant = serializers.CharField(required=True, write_only=True)
    user_id = serializers.IntegerField(required=True, write_only=True)
    
    def create(self, validated_data):
        # call ai and run async
        upload_to_ai.delay(validated_data['questions'], validated_data['user_info'], validated_data['merchant'], validated_data['user_id'])
        return "successful"

tasks.py

from celery import shared_task

from bot.apps import BotConfig
from model.QA_Model import predict

@shared_task()
def upload_to_ai(questions:list, user_info:str, merchant:str, user_id:int):
    model_predictions = predict(questions, BotConfig.MODEL, user_info)
    print(model_predictions)
    return

apps.py

from django.apps import AppConfig
from haystack import Pipeline
from haystack.nodes import FARMReader


class BotConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'bot'
    reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", top_k=3, use_gpu=False)

    # model pipeline
    MODEL = Pipeline()
    MODEL.add_node(component=reader, name="Reader", inputs=["Query"])
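Note that building the model in the `AppConfig` class body means it is constructed at import time in every process that imports the app, including each Celery worker. An alternative worth considering is loading it lazily, once per process, on first use. A minimal, framework-agnostic sketch of that pattern (`load_pipeline` is a hypothetical stand-in for the FARMReader/Pipeline construction above):

```python
import threading

_PIPELINE = None
_LOCK = threading.Lock()


def load_pipeline():
    # Hypothetical stand-in for the real construction, e.g.:
    #   reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", ...)
    #   pipeline = Pipeline(); pipeline.add_node(component=reader, ...)
    return object()


def get_pipeline():
    """Build the pipeline once per process, on first use."""
    global _PIPELINE
    if _PIPELINE is None:
        with _LOCK:
            if _PIPELINE is None:  # double-checked so concurrent callers build it once
                _PIPELINE = load_pipeline()
    return _PIPELINE
```

With this, tasks.py would call `get_pipeline()` instead of touching `BotConfig.MODEL`, so the model is only constructed inside the process that actually runs predictions.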

QA_Model.py

from haystack import Document
import pandas as pd

def predict(query:list, model, context):
    '''
    This function predicts the answers to the questions passed as query
    Arguments:
    query: the question(s) you intend to ask
    model: the model used for the prediction
    context: the data from which the model will find its answers
    '''

    result = model.run_batch(queries=query,
                             documents=[Document(content=context)])
    response = convert_to_dict(result['answers'], query)
    return response
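`convert_to_dict` is not shown in the post. A hypothetical sketch of what it might look like, assuming `result['answers']` holds one list of answer objects (each with `.answer` and `.score` attributes, as Haystack's `Answer` has) per query:

```python
def convert_to_dict(answers_per_query, queries):
    """Hypothetical helper: map each query to its top answers.

    Assumes answers_per_query[i] is the list of answer objects
    (each exposing .answer and .score) returned for queries[i].
    """
    return {
        query: [{"answer": a.answer, "score": a.score} for a in answers]
        for query, answers in zip(queries, answers_per_query)
    }
```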

Every time I send a request, the ML model starts running, as shown in the screenshot (inference progress bar), but it never gets past 0%.


Answer 1, by rvpgvaaj:

I have solved the problem. All along I had been running the ML model in a background process with Celery, but when I ran it on the main thread it worked. I still don't know why it wouldn't run in a background process.
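If running the prediction synchronously is acceptable, Celery can be told to execute tasks in the calling process instead of dispatching them to a worker, which matches what worked here. Assuming the usual Celery-with-Django setup (settings read with the `CELERY_` namespace), a sketch:

```python
# settings.py -- run Celery tasks synchronously in the web process
CELERY_TASK_ALWAYS_EAGER = True      # .delay() executes the task inline
CELERY_TASK_EAGER_PROPAGATES = True  # task exceptions are raised immediately
```

If the task should stay asynchronous, another common workaround when large PyTorch models hang inside Celery's default prefork workers is to start the worker with a non-forking pool, e.g. `celery -A <project> worker --pool=solo`.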
