When I try to get a response from my chatbot using the GPU, I get the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_mm)
I tried running this code on the GPU and printing the tag, but I got this error.
My training code is as follows:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # device = cuda

with open('intents.json') as f:
    intents = json.load(f)

file = 'data.pth'
data = torch.load(file)
input_size = data['input_size']
model_state = data['model_state']
output_size = data['output_size']
hidden_size = data['hidden_size']
all_words = data['all_words']
tags = data['tags']

model = NeuralNetwork(input_size, hidden_size, output_size)
model.load_state_dict(model_state)
model.eval()

@jit(target_backend='cuda')
def get_response(pattern):
    sentence = tokenize(pattern)
    BoW = bag_of_word(sentence, all_words)
    BoW = torch.from_numpy(BoW).to(device)
    output = model.forward_propagation(BoW)
    # print(output)
    _, predicted = torch.max(output, dim=-1)
    tag = tags[predicted.item()]  # predicted tag for the input speech
    # print(tag)
    probs = torch.softmax(output, dim=-1)  # convert outputs to probabilities between 0 and 1
    # print(probs)
    prob = probs[predicted.item()]  # probability of the predicted tag
    # print(prob)
    return prob, tag

pattern = speech_to_text()
prob, tag = get_response(pattern)
print(tag)
1 Answer
Move the model to the device with to():
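The error occurs because the input tensor is moved to `device` but the model's weights stay on the CPU, so the matrix multiply mixes cuda:0 and cpu tensors. A minimal sketch of the fix (the two-layer network and its sizes here are hypothetical stand-ins for the asker's NeuralNetwork):

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical stand-in for NeuralNetwork(input_size, hidden_size, output_size)
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 3))
model.to(device)   # <-- the fix: move the weights to the same device as the input
model.eval()

bow = torch.rand(8).to(device)  # input tensor, already on `device`
with torch.no_grad():
    output = model(bow)         # no cross-device matmul now

print(output.device.type)
```

In the asker's code, adding `model.to(device)` right after `model.load_state_dict(model_state)` is enough; alternatively, `torch.load(file, map_location=device)` loads the saved weights directly onto the target device.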