
Questions tagged [large-language-model]

A general tag for large language model (LLM)-related subjects. Please always use a more specific tag if one is available (GPT variants, PaLM, LLaMA, BLOOM, Claude, etc.).

0 votes
0 answers
10 views

The relationship between chunk_size, context length and embedding length in a Langchain RAG Framework

I am currently working on a LangChain RAG framework using Ollama, and I have a question about the chunk size in the document splitter. I have decided to use the qwen2:72b model as both the embedding ...
Joesf.Albert
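For the chunk-size question above, a minimal sketch assuming LangChain's RecursiveCharacterTextSplitter and OllamaEmbeddings (the qwen2:72b name comes from the excerpt; the sizes below are illustrative assumptions). The key relationships: chunk_size is counted in characters by default, each chunk must still fit the embedding model's token context window once tokenized, and the embedding length (vector dimension) is a fixed property of the model, independent of chunk_size.

```python
# Minimal sketch (not the asker's code): chunk_size is in characters by default,
# so chunks must still fit the embedding model's token context window after
# tokenization. The model name comes from the question; sizes are assumptions.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # characters per chunk (assumption, tune to your data)
    chunk_overlap=100,  # overlap between chunks to preserve context
)
chunks = splitter.split_text(open("doc.txt").read())  # placeholder source file

# The embedding length (vector dimension) is fixed by the model,
# not something chunk_size controls.
embeddings = OllamaEmbeddings(model="qwen2:72b")
vectors = embeddings.embed_documents(chunks)
print(len(vectors[0]))  # embedding dimension reported by the model
```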
-1 votes
0 answers
11 views

ChatBot for PDFs

I just started my first internship, and the company wants to build a program that is fed daily PDFs with hundreds of pages and can answer questions based on them. The program is ...
Daniel Alpeñes De Lucca
0 votes
0 answers
7 views

spacy-llm & SpanCat for address parsing

I'm currently developing a project to standardize and correct a dataset of inconsistently formatted addresses using spaCy-LLM and spaCy.SpanCat.v3. The goal is to train a model on examples of ...
Hammad Javaid
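For the address-parsing question above, a hedged sketch of wiring a spacy-llm pipe with the spacy.SpanCat.v3 task named in the excerpt; the label set, the backing model (spacy.GPT-3-5.v2), and the example text are assumptions and should be checked against the spacy-llm documentation.

```python
# Hedged sketch, not the asker's code: a spacy-llm pipeline using the
# spacy.SpanCat.v3 task mentioned in the question. Labels and the backing
# model are illustrative assumptions; this backend needs OPENAI_API_KEY set.
import spacy  # requires spacy-llm to be installed as well

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.SpanCat.v3",
            "labels": ["STREET", "CITY", "POSTCODE"],  # assumed label set
        },
        "model": {"@llm_models": "spacy.GPT-3-5.v2"},  # assumed backend
    },
)
doc = nlp("221B Baker Street, London NW1 6XE")
print([(span.text, span.label_) for span in doc.spans["sc"]])
```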
-1 votes
0 answers
14 views

AssertionError: Unexpected kwargs: {'use_flash_attention_2': False}

I'm using EvolvingLMMs-Lab/lmms-eval to evaluate the LLaVA model after running accelerate launch --num_processes=8 -m lmms_eval --model llava --model_args pretrained="liuhaotian/llava-v1.5-7b" ...
ahmad • 41
0 votes
0 answers
15 views

SQL query is not correctly generated using langchain, nlp and llm

I have created an application that takes an input question and converts it into a SQL query using LangChain, an LLM, and NLP, but it sometimes generates an incorrect query, especially at the beginning. The following is the ...
kalpesh patil
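For the text-to-SQL question above, a hedged sketch of the common LangChain pattern using create_sql_query_chain; the database URI and the Ollama model are placeholder assumptions, and inspecting the generated SQL before executing it (plus supplying table info or few-shot examples in the prompt) is a usual way to cut down on wrong queries.

```python
# Hedged sketch, not the asker's code: LangChain's create_sql_query_chain
# pattern for turning a natural-language question into SQL. The database URI
# and LLM below are placeholder assumptions.
from langchain_community.utilities import SQLDatabase
from langchain_community.llms import Ollama
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
llm = Ollama(model="llama3")                       # placeholder model

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many orders were placed last month?"})
print(sql)  # inspect/validate the generated SQL before running it
```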
1 vote
0 answers
18 views

Passing Additional Information in LangChain abatch Calls

Given an abatch call for a LangChain chain, I need to pass additional information, beyond just the content, to the function so that this information is available in the callback, specifically in the ...
TantrixRobotBoy
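For the abatch question above, a hedged sketch of one common approach: pass the extra information through the config argument of abatch as metadata, which LangChain forwards to callback handlers (it arrives in hooks such as on_chain_start). The chain, model, and metadata keys below are illustrative assumptions.

```python
# Hedged sketch, not the asker's code: passing extra information through the
# `config` argument of abatch so a callback handler can read it as metadata.
import asyncio
from langchain_core.callbacks import AsyncCallbackHandler
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama

class MetadataLogger(AsyncCallbackHandler):
    async def on_chain_start(self, serialized, inputs, *, run_id,
                             parent_run_id=None, tags=None, metadata=None, **kwargs):
        # the additional information arrives here, alongside the inputs
        print("metadata:", metadata)

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | Ollama(model="llama3")  # placeholder model

inputs = [{"text": "first document"}, {"text": "second document"}]
# one config per input, each carrying its own metadata plus the callback
configs = [
    {"metadata": {"doc_id": i}, "callbacks": [MetadataLogger()]}
    for i, _ in enumerate(inputs)
]
results = asyncio.run(chain.abatch(inputs, config=configs))
```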
0 votes
0 answers
7 views

ModuleNotFoundError when importing HuggingFaceLLM from llama_index.core.llms.huggingface

I’m trying to import HuggingFaceLLM using the following line of code: from llama_index.core.llms.huggingface import HuggingFaceLLM. I know that LlamaIndex keeps updating, and previously this import ...
Nick • 343
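For the import question above, a hedged sketch of the layout used by recent LlamaIndex releases, where the HuggingFace LLM wrapper moved into a separate integration package; the package path and the model names below should be verified against the installed version.

```python
# Hedged sketch: on recent LlamaIndex releases the HuggingFace LLM wrapper
# lives in a separate integration package rather than llama_index.core:
#   pip install llama-index-llms-huggingface
from llama_index.llms.huggingface import HuggingFaceLLM

llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",      # placeholder model
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",  # placeholder tokenizer
)
print(llm.complete("Hello"))  # downloads the model weights on first use
```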
-8 votes
1 answer
29 views

Dealing with long context prompts in mistral 7B [closed]

I am working with the Mistral 7B model in Kaggle notebooks. In my case, I pass information into the prompt and want the model to extract the functional needs from the document. For example, I pass it ...
rayenpe12
0 votes
0 answers
33 views

OpenWebUI + Pipelines (w/ langchain hopefully)

I'm currently at the last step of https://github.com/open-webui/pipelines. I tried to start the server, but it shows the error in the image below. I'm not sure whether the server is already running, nor ...
Ryan Lutz
-1 votes
0 answers
16 views

Trying to use Llama 3 on VertexAI is throwing 400 Bad Request but the error doesn't make sense

I am trying to use Llama 3 on Vertex AI to process an image, extract data from it, and put it into JSON format. I have this working with Gemini in a Jupyter Notebook hosted on Vertex, but the ...
Carlos Muentes
0 votes
0 answers
7 views

Converting PDFs to Markdown for Higher Quality Embeddings with Langchain.js

I am working on RAG LLM projects with Langchain.js using Node.js. Most of the data I retrieve are PDFs and a bit of JSON. For higher quality, I would like to convert my PDFs into Markdown before ...
Uiyoung Kim
0 votes
0 answers
9 views

I am getting this error while building a RAG model

I am getting this error while building a RAG model using the qwen2 model instead of the default llama2 that Chroma uses. My code: from langchain_community.embeddings import OllamaEmbeddings from ...
Dakshi R
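Since the error in the excerpt above is truncated, here is only a hedged sketch of the usual Chroma + OllamaEmbeddings wiring it starts from, with the embedding model pinned to qwen2 instead of a default; the documents and persist directory are placeholder assumptions.

```python
# Hedged sketch, not the asker's code: building a Chroma store with Ollama
# embeddings from the qwen2 model. Documents and paths are placeholders.
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

# qwen2 must already be available in Ollama (e.g. `ollama pull qwen2`)
embeddings = OllamaEmbeddings(model="qwen2")

docs = [Document(page_content="example chunk of text")]  # placeholder documents
vectorstore = Chroma.from_documents(
    docs, embedding=embeddings, persist_directory="./chroma_db"
)
print(vectorstore.similarity_search("example", k=1))
```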
-2 votes
0 answers
24 views

GPT4ALL not working after installation on Windows Version 10.0.22631.3737 Ryzen5 processor [closed]

I want to run the GPT4All chatbot locally on my laptop. I have cloned the GitHub repository into the directory I made for GPT4All. All the files and folders downloaded and installed properly. The ...
Nandini Dasgupta
0 votes
0 answers
29 views

How to fix this error: KeyError: 'model.embed_tokens.weight'

This is the detailed error: Traceback (most recent call last): File "/home/cyq/zxc/SmartEdit/train/DS_MLLMSD11_train.py", line 769, in <module> train() File "/home/cyq/zxc/...
hshsh • 11
0 votes
0 answers
17 views

Glue job with Bedrock not running in parallel

I am writing a Glue job to process a PySpark DataFrame using Bedrock, which was recently added to boto3. The job will get the sentiment of a text field in the DataFrame using one of the LLMs in Bedrock, ...
ddd • 4,967
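For the Glue/Bedrock question above, a hedged sketch of one way to get parallel Bedrock calls from Spark: score each partition with mapPartitions and create the boto3 bedrock-runtime client inside the partition function, so every executor holds its own client instead of serializing one from the driver. The model ID, request format, region, and columns are illustrative assumptions, not from the question.

```python
# Hedged sketch, not the asker's code: calling Bedrock from each Spark
# partition so requests run in parallel across executors. Model ID, request
# body format, region, and column names are illustrative assumptions.
import json
import boto3
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([Row(id=1, text="I love this product"),
                            Row(id=2, text="This was disappointing")])

def score_partition(rows):
    # one client per executor/partition, not one shared client on the driver
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    for row in rows:
        body = json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 10,
            "messages": [{"role": "user",
                          "content": f"Sentiment (positive/negative): {row.text}"}],
        })
        resp = client.invoke_model(
            modelId="anthropic.claude-3-haiku-20240307-v1:0", body=body
        )
        sentiment = json.loads(resp["body"].read())["content"][0]["text"]
        yield Row(id=row.id, text=row.text, sentiment=sentiment)

result = df.rdd.mapPartitions(score_partition).toDF()
result.show(truncate=False)
```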
