Questions tagged [mistral-7b]
The mistral-7b tag has no usage guidance.
mistral-7b · 72 questions
0 votes · 0 answers · 19 views
Need to implement function calling for Mistral 7B Instruct v0.2 model in SageMaker
I am trying to add function calling to my chatbot code so it actually fetches the tools when the user query is related to a tool. I tried a format I found on the internet, but I don't know where the error is. ...
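One common pattern for function calling with instruct models is to prompt the model to emit a JSON object and then parse it out of the free-text reply. A minimal sketch of that parsing step, assuming the hypothetical convention that the model is prompted to answer with {"name": ..., "arguments": {...}} when a tool applies:

```python
import json
import re

def extract_tool_call(text):
    """Pull the first JSON object out of a model reply and treat it as a
    tool call of the shape {"name": ..., "arguments": {...}}.
    Returns None when the reply contains no parsable object with a "name" key.
    (The shape is an assumed prompting convention, not a Mistral API.)"""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        call = json.loads(match.group())
    except json.JSONDecodeError:
        return None
    return call if "name" in call else None
```

With a reply like `'Sure: {"name": "get_weather", "arguments": {"city": "Paris"}}'`, the helper returns the parsed dict; on plain prose it returns None, so the chatbot can fall back to a normal answer.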
0 votes · 0 answers · 29 views
Script for streaming Mistral-7B LLM output only streams on server side; client gets full output
I designed a remote server-client pipeline, which is supposed to load the model on the server and stream the output of the model.
At the moment, the output is correctly streamed, but only inside the ...
0 votes · 0 answers · 15 views
How to build the Mistral model into LocalAI permanently?
I would like to create a Dockerfile in which I would run LocalAI with the Mistral model built in.
I'm wondering if it's possible to build the Mistral model into the LocalAI image permanently. Here's my ...
0 votes · 1 answer · 186 views
How to build the Mistral model into Ollama permanently?
I would like to create a Dockerfile in which I would run Ollama with the Mistral model built in. So far, I have only achieved this: when I run Ollama, it downloads Mistral in one single Dockerfile (...
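A commonly shared workaround for baking a model into the image is to pull it at build time: start the Ollama server in the background during the RUN step, wait for it to come up, then pull the model so its blobs land in the image layer. A sketch, assuming the official ollama/ollama base image and that a fixed sleep is long enough for the server to start:

```dockerfile
FROM ollama/ollama
# Start the server in the background, give it a moment to come up,
# then pull mistral so the weights are stored inside this layer.
RUN ollama serve & sleep 5 && ollama pull mistral
```

The trade-off is a large image; the sleep duration and whether the base image's default storage path is kept across layers are assumptions worth verifying against the Ollama documentation.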
1 vote · 1 answer · 48 views
Mistral7b response starts with an extra leading space when streamed with Ollama
When I stream the response of the mistral7b LLM with Ollama, the very first streamed chunk has an extra space on the left. Below is my code:
import ollama
stream = ollama.chat(
model='mistral',
...
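A simple client-side fix is to strip leading whitespace from the first non-empty chunk only, and pass everything after that through untouched. A sketch of such a wrapper (the `ollama.chat` usage in the comment mirrors the question's code and is an assumption about the caller's setup):

```python
def stream_without_leading_space(chunks):
    """Yield streamed text chunks, trimming leading whitespace from the
    first non-empty chunk only; later chunks pass through unchanged."""
    first = True
    for text in chunks:
        if first:
            text = text.lstrip()
            if not text:
                continue  # chunk was all whitespace; keep looking
            first = False
        yield text

# Assumed usage with the ollama Python client:
# stream = ollama.chat(model='mistral', messages=msgs, stream=True)
# for piece in stream_without_leading_space(c['message']['content'] for c in stream):
#     print(piece, end='', flush=True)
```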
0 votes · 0 answers · 67 views
My LLM application in Streamlit (using Python) takes a long time to generate the response
I am creating an LLM application using Ollama, LangChain, RAG and Streamlit. I am using Mistral as my LLM model from Ollama. However, after uploading the PDF file in Streamlit, it takes so much time ...
-1 votes · 0 answers · 36 views
RAG model error: Mistral7B is not giving the correct response when deployed locally, and returns the same irrelevant response every time
I am creating a RAG model that provides a conversational chatbot for users, loading a custom knowledge base which I created in docx format.
I used Haystack instead of LlamaIndex here, and Chainlit ...
0 votes · 0 answers · 99 views
Inference with LLaVA v1.6 Mistral model on Amazon SageMaker
I've deployed the model llava-hf/llava-v1.6-mistral-7b-hf on Amazon SageMaker by simply pasting the deployment code from the model card (https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf). ...
1 vote · 1 answer · 140 views
Need clarification for a custom RAG project using Mistral 7B Instruct
I am a LangChain beginner.
I am tasked with setting up an AI assistant for the app of a fictional theater, let's call it SignStage, that has two halls, A and B, and each play is staged twice a day in the ...
0 votes · 1 answer · 103 views
Mistral7B Instruct input size limited
I recently fine-tuned a Mistral 7B Instruct v0.3 model and deployed it on an AWS SageMaker endpoint, but got errors like this:
"Received client error (422) from primary with message "{"...
0 votes · 1 answer · 41 views
TGI does not reference model weights
My server's proxy does not allow me to reach Hugging Face. So I downloaded the Mistral 7B weights from GitHub on another computer, sftp'd them over to the server, and untarred the contents:
$ tar -tvf ...
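TGI can load weights from a local directory instead of the Hub: mount the folder into the container and pass the in-container path as `--model-id`. A sketch, assuming the weights are in standard Hugging Face format (config.json, tokenizer files, safetensors) and `/path/to/mistral-7b` stands in for wherever the archive was unpacked:

```shell
# Mount the local weights and point --model-id at the in-container path,
# so TGI never tries to reach the Hugging Face Hub.
docker run --gpus all --shm-size 1g -p 8080:80 \
  -v /path/to/mistral-7b:/data/mistral-7b \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id /data/mistral-7b
```

If TGI still "does not reference" the weights, a likely cause is that the untarred directory nests the model one level deeper than the mounted path.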
0 votes · 0 answers · 29 views
QLoRA using peft from HF and a custom class for binary classification
I am fine-tuning a mistral-7B LLM model for binary classification. I realize it may be overkill, but we are running some experiments.
So far, I have used Hugging Face libraries like peft and ...
1 vote · 1 answer · 215 views
Performing Function Calling with Mistral AI through Hugging Face Endpoint
I am trying to perform function calling using Mistral AI through the Hugging Face endpoint. Mistral AI requires input in a specific string format (assistant: ... \n user: ...). However, the input ...
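Mistral-7B-Instruct expects a single prompt string in its `[INST]` template rather than a list of chat messages. A sketch of converting role/content messages into that shape; the exact spacing and `<s>`/`</s>` placement should be verified against the model tokenizer's chat template before relying on it:

```python
def to_mistral_prompt(messages):
    """Flatten chat messages ({"role": ..., "content": ...} dicts) into
    the Mistral-7B-Instruct string format:
    <s>[INST] user [/INST] assistant</s>[INST] next user [/INST]
    System handling is omitted; this is a sketch of the template only."""
    prompt = "<s>"
    for m in messages:
        if m["role"] == "user":
            prompt += f"[INST] {m['content']} [/INST]"
        elif m["role"] == "assistant":
            prompt += f" {m['content']}</s>"
    return prompt
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False)` from transformers produces the authoritative version of this string for the deployed checkpoint.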
0 votes · 0 answers · 89 views
How to fix RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I am trying to use a custom CSV dataset to fine-tune the model TheBloke/Mistral-7B-Instruct-v0.1-GPTQ. I performed data preprocessing, split the dataset into train, validation and test sets, and then ...
0 votes · 1 answer · 188 views
Using Llama_index with Mistral Model
I'm new to the field of large language models (LLMs), so I apologize if my explanation isn't clear.
I have a Mistral model running in a private cloud, and I have both the URL and the model name.
URL = ...