
Light Rag Installation #628

Open
scalenow opened this issue Jan 13, 2025 · 21 comments
Labels
bug Something isn't working

Comments


scalenow commented Jan 13, 2025

Description

Hey folks, I'm going crazy trying to install LightRAG. No hair left on my head!
(danalysis) root@scalenowai:~/miniconda3/envs/danalysis/kotaemon-app/scripts# pip check
chromadb 0.4.3 has requirement pydantic<2.0,>=1.9, but you have pydantic 2.10.5.
fastapi 0.99.1 has requirement pydantic!=1.8,!=1.8.1,<2.0.0,>=1.7.4, but you have pydantic 2.10.5.
(danalysis) root@scalenowai:~/miniconda3/envs/danalysis/kotaemon-app/scripts#

How do we resolve this? Upgrading or downgrading pydantic makes other libraries incompatible.

LightRAG gives an error while executing.

Regards,
Abhi

Reproduction steps

(danalysis) root@scalenowai:~/miniconda3/envs/danalysis/kotaemon-app/scripts# pip check
chromadb 0.4.3 has requirement pydantic<2.0,>=1.9, but you have pydantic 2.10.5.
fastapi 0.99.1 has requirement pydantic!=1.8,!=1.8.1,<2.0.0,>=1.7.4, but you have pydantic 2.10.5.
(danalysis) root@scalenowai:~/miniconda3/envs/danalysis/kotaemon-app/scripts#

Screenshots


Logs

Traceback (most recent call last):
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/queueing.py", line 575, in process_events
    response = await route_utils.call_process_api(
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/blocks.py", line 1923, in process_api
    result = await self.call_function(
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/utils.py", line 663, in async_iteration
    return await iterator.__anext__()
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/utils.py", line 656, in __anext__
    return await anyio.to_thread.run_sync(
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/utils.py", line 639, in run_sync_iterator_async
    return next(iterator)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/gradio/utils.py", line 801, in gen_wrapper
    response = next(iterator)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/pages/chat/__init__.py", line 1048, in chat_fn
    for response in pipeline.stream(chat_input, conversation_id, chat_history):
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/reasoning/simple.py", line 287, in stream
    docs, infos = self.retrieve(message, history)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/reasoning/simple.py", line 130, in retrieve
    retriever_docs = retriever_node(text=query)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/base.py", line 1097, in __call__
    raise e from None
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/base.py", line 1088, in __call__
    output = self.fl.exec(func, args, kwargs)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/backends/base.py", line 151, in exec
    return run(*args, **kwargs)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/middleware.py", line 144, in __call__
    raise e from None
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/middleware.py", line 141, in __call__
    _output = self.next_call(*args, **kwargs)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/middleware.py", line 117, in __call__
    return self.next_call(*args, **kwargs)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/theflow/base.py", line 1017, in _runx
    return self.run(*args, **kwargs)
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/index/file/graph/lightrag_pipelines.py", line 448, in run
    graphrag_func, query_params = self._build_graph_search()
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/index/file/graph/lightrag_pipelines.py", line 387, in _build_graph_search
    llm_func, embedding_func, _, _ = get_default_models_wrapper()
  File "/root/miniconda3/envs/danalysis/kotaemon-app/install_dir/env/lib/python3.10/site-packages/ktem/index/file/graph/lightrag_pipelines.py", line 121, in get_default_models_wrapper
    embedding_func = EmbeddingFunc(
NameError: name 'EmbeddingFunc' is not defined

Browsers

No response

OS

No response

Additional information

No response

@scalenow scalenow added the bug Something isn't working label Jan 13, 2025
@scalenow (Author)

@varunsharma27 Any suggestions?

@scalenow (Author)

Tried downgrading and upgrading; now it's not compatible with fastapi 4.5. Any ideas, folks? This is driving me crazy.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
chromadb 0.4.3 requires pydantic<2.0,>=1.9, but you have pydantic 2.9.0 which is incompatible.
fastapi 0.99.1 requires pydantic!=1.8,!=1.8.1,<2.0.0,>=1.7.4, but you have pydantic 2.9.0 which is incompatible.

@scalenow (Author)

This is the only dependency that needs resolution:
ollama 0.4.5 requires pydantic<3.0.0,>=2.9.0, but you have pydantic 1.9.0 which is incompatible.
Upgrading pydantic breaks other packages.
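For what it's worth, this conflict is unsatisfiable as stated: chromadb 0.4.3 pins pydantic below 2.0 while ollama 0.4.5 needs at least 2.9, so no single pydantic version can please both. A minimal sketch with the `packaging` library (the version pins are taken from the error messages in this thread) makes that concrete:

```python
# Sketch: show that chromadb's and ollama's pydantic requirements have an
# empty intersection, which is why pip can never resolve them together.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

chromadb_req = SpecifierSet(">=1.9,<2.0")    # from chromadb 0.4.3's error
ollama_req = SpecifierSet(">=2.9.0,<3.0.0")  # from ollama 0.4.5's error

combined = chromadb_req & ollama_req         # intersection of both pins
candidates = ["1.9.0", "1.10.13", "2.9.0", "2.10.5"]
satisfiable = [v for v in candidates if Version(v) in combined]
print(satisfiable)  # [] -> no pydantic version satisfies both at once
```

So the only real fixes are moving to a chromadb release that supports pydantic 2.x, or dropping the package that needs pydantic 2.x; juggling pydantic alone cannot work.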

varunsharma27 (Contributor) commented Jan 15, 2025

I believe I've seen this NameError: name 'EmbeddingFunc' is not defined before. There is an issue with incompatible dependencies. I'd recommend running pip install nano-graphrag followed by:

pip uninstall hnswlib chroma-hnswlib && pip install chroma-hnswlib

to fix this issue.

@scalenow (Author)

@varunsharma27 Tried this; it does not work. Plus the application is so, so slow it cannot even read from a single PDF; it takes ages to summarise.

This all started after the upgrade. Six months ago it was working like a rocket on a local LLM.

@scalenow (Author)

@taprosoft Any recommendations?

@scalenow (Author)

@varunsharma27 @taprosoft If the dependencies are not sorted out, the application will not work.

How are people using LightRAG, nano-graphrag, or MS GraphRAG?

@dc13208-khilmi

@scalenow I've encountered the same problem when installing it with pip in a virtual environment. The dependencies of the individual libraries don't match, so you have to adjust them yourself (which takes a lot of time). I suggest using Docker to run this app (this is the best practice). Or, if you insist on using pip, I think following each step of the Dockerfile should also work.

@scalenow (Author)

@dc13208-khilmi Tried Docker as per your suggestion. Very unstable: it plotted a graph of the Roman empire (from a text file) once, but after that it stopped working and gives errors every time you use LightRAG.

Testing other applications, which are also slow but are not throwing errors.

Not sure what the upgrade has done; the older version was very efficient and productive.

@varunsharma27 (Contributor)

> @dc13208-khilmi Tried Docker as per your suggestion. Very unstable: it plotted a graph of the Roman empire (from a text file) once, but after that it stopped working and gives errors every time you use LightRAG.
>
> Testing other applications, which are also slow but are not throwing errors.
>
> Not sure what the upgrade has done; the older version was very efficient and productive.

What was the last version you remember being fast? Perhaps we can compare identical workflows on it vs. the latest release to see where the time is being spent.

This is the pip freeze of the local virtual env that I use for development (Python: 3.10.14), if that helps:

accelerate==1.2.1
aenum==3.1.15
aioboto3==13.3.0
aiobotocore==2.16.0
aiofiles==23.2.1
aiohappyeyeballs==2.4.4
aiohttp==3.11.11
aioitertools==0.12.0
aiosignal==1.3.2
alabaster==1.0.0
alembic==1.14.0
annotated-types==0.7.0
anthropic==0.43.0
anyio==4.8.0
anytree==2.12.1
APScheduler==3.11.0
arrow==1.3.0
asgiref==3.8.1
asttokens==3.0.0
async-timeout==4.0.3
asyncer==0.0.8
asyncpg==0.30.0
attrs==24.3.0
autograd==1.7.0
azure-ai-documentintelligence==1.0.0
azure-core==1.32.0
babel==2.16.0
backoff==2.2.1
bcrypt==4.2.1
beartype==0.18.5
beautifulsoup4==4.12.3
binaryornot==0.4.4
black==24.10.0
boto3==1.35.81
botocore==1.35.81
build==1.2.2.post1
cachetools==5.5.0
certifi==2024.12.14
cffi==1.17.1
cfgv==3.4.0
chardet==5.2.0
charset-normalizer==3.4.1
chroma-hnswlib==0.7.6
chromadb==0.5.16
click==8.1.8
cloudpickle==3.1.1
cohere==5.13.8
coloredlogs==15.0.1
colorlog==6.9.0
contourpy==1.3.1
cookiecutter==2.6.0
coverage==7.6.10
cryptography==42.0.8
cycler==0.12.1
dataclasses-json==0.6.7
datasets==3.2.0
decorator==5.1.1
deepdiff==8.1.1
defusedxml==0.7.1
Deprecated==1.2.15
deprecation==2.1.0
dill==0.3.8
dirtyjson==1.0.8
diskcache==5.6.3
distlib==0.3.9
distro==1.9.0
dnspython==2.7.0
docutils==0.21.2
dspy==2.5.43
dspy-ai==2.5.43
duckduckgo_search==6.1.12
durationpy==0.9
elastic-transport==8.17.0
elasticsearch==8.13.2
email_validator==2.2.0
emoji==2.14.0
et_xmlfile==2.0.0
exceptiongroup==1.2.2
executing==2.1.0
fast-langdetect==0.2.4
fastapi==0.111.1
fastapi-cli==0.0.7
fastapi-sso==0.10.0
fastavro==1.10.0
fastembed==0.5.0
fasttext-predict==0.9.2.4
ffmpy==0.5.0
filelock==3.16.1
filetype==1.2.0
flake8==7.1.1
flatbuffers==24.12.23
fonttools==4.55.3
frozenlist==1.5.0
fsspec==2024.9.0
future==1.0.0
gensim==4.3.3
google-ai-generativelanguage==0.6.6
google-api-core==2.24.0
google-api-python-client==2.159.0
google-auth==2.37.0
google-auth-httplib2==0.2.0
google-generativeai==0.7.2
googleapis-common-protos==1.66.0
googlesearch-python==1.2.5
gradio==4.39.0
gradio_client==1.1.1
graspologic==3.4.1
graspologic-native==1.2.1
greenlet==3.1.1
gremlinpython==3.7.3
grpcio==1.67.1
grpcio-status==1.62.3
grpcio-tools==1.62.3
gunicorn==22.0.0
h11==0.14.0
h2==4.1.0
hpack==4.0.0
html2text==2024.2.26
httpcore==1.0.7
httplib2==0.22.0
httptools==0.6.4
httpx==0.27.2
httpx-sse==0.4.0
huggingface-hub==0.27.1
humanfriendly==10.0
hyperframe==6.0.1
hyppo==0.4.0
identify==2.6.5
idna==3.10
imagesize==1.4.1
importlib_metadata==8.4.0
importlib_resources==6.5.2
iniconfig==2.0.0
ipython==8.31.0
isodate==0.7.2
jedi==0.19.2
Jinja2==3.1.5
jiter==0.8.2
jmespath==1.0.1
joblib==1.4.2
json_repair==0.35.0
jsonpatch==1.33
jsonpath-python==1.0.6
jsonpickle==4.0.1
jsonpointer==3.0.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
kiwisolver==1.4.8
-e git+ssh://[email protected]/varunsharma27/kotaemon.git@701a018f7fd44e8609f60a35464351e9482da451#egg=kotaemon&subdirectory=libs/kotaemon
-e git+ssh://[email protected]/varunsharma27/kotaemon.git@701a018f7fd44e8609f60a35464351e9482da451#egg=ktem&subdirectory=libs/ktem
kubernetes==31.0.0
lancedb==0.18.0
langchain==0.2.15
langchain-anthropic==0.1.23
langchain-cohere==0.2.4
langchain-community==0.2.11
langchain-core==0.2.43
langchain-experimental==0.0.64
langchain-google-genai==1.0.10
langchain-openai==0.1.25
langchain-text-splitters==0.2.4
langdetect==1.0.9
langsmith==0.1.147
lightrag-hku==1.1.1
linkify-it-py==2.0.3
litellm==1.53.7
llama-cloud==0.1.8
llama-hub==0.0.79.post1
llama-index==0.10.68
llama-index-agent-openai==0.2.9
llama-index-cli==0.1.13
llama-index-core==0.10.68.post1
llama-index-embeddings-openai==0.1.11
llama-index-indices-managed-llama-cloud==0.2.7
llama-index-legacy==0.9.48.post4
llama-index-llms-openai==0.1.31
llama-index-multi-modal-llms-openai==0.1.9
llama-index-program-openai==0.1.7
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.33
llama-index-readers-llama-parse==0.1.6
llama-index-vector-stores-chroma==0.1.10
llama-index-vector-stores-lancedb==0.1.7
llama-index-vector-stores-milvus==0.1.23
llama-index-vector-stores-qdrant==0.2.17
llama-parse==0.4.9
llama_cpp_python==0.2.7
llvmlite==0.43.0
loguru==0.7.3
lxml==5.3.0
magicattr==0.1.6
Mako==1.3.8
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.25.1
matplotlib==3.10.0
matplotlib-inline==0.1.7
mccabe==0.7.0
mdit-py-plugins==0.4.2
mdurl==0.1.2
milvus-lite==2.4.11
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.1.0
multiprocess==0.70.16
mypy-extensions==1.0.0
nano-graphrag==0.0.8.2
nano-vectordb==0.0.4.3
neo4j==5.27.0
nest-asyncio==1.6.0
networkx==3.4.2
nltk==3.9.1
nodeenv==1.9.1
numba==0.60.0
numpy==1.26.4
oauthlib==3.2.2
olefile==0.47
ollama==0.4.6
onnx==1.17.0
onnxruntime==1.19.2
openai==1.59.7
openpyxl==3.1.5
opentelemetry-api==1.27.0
opentelemetry-exporter-otlp-proto-common==1.27.0
opentelemetry-exporter-otlp-proto-grpc==1.27.0
opentelemetry-instrumentation==0.48b0
opentelemetry-instrumentation-asgi==0.48b0
opentelemetry-instrumentation-fastapi==0.48b0
opentelemetry-proto==1.27.0
opentelemetry-sdk==1.27.0
opentelemetry-semantic-conventions==0.48b0
opentelemetry-util-http==0.48b0
optuna==4.1.0
oracledb==2.5.1
orderly-set==5.2.3
orjson==3.10.14
overrides==7.7.0
packaging==24.2
pandas==2.2.3
parameterized==0.9.0
parso==0.8.4
pathspec==0.12.1
patsy==1.0.1
pexpect==4.9.0
pillow==10.4.0
platformdirs==4.3.6
plotly==5.24.1
pluggy==1.5.0
portalocker==2.10.1
posthog==3.8.3
POT==0.9.5
pre_commit==4.0.1
prompt_toolkit==3.0.48
propcache==0.2.1
proto-plus==1.25.0
protobuf==4.25.5
psutil==6.1.1
psycopg==3.2.3
psycopg-binary==3.2.3
psycopg-pool==3.2.4
ptyprocess==0.7.0
pure_eval==0.2.3
py_rust_stemmers==0.1.3
pyaml==23.12.0
pyarrow==18.1.0
pyasn1==0.6.1
pyasn1_modules==0.4.1
pycodestyle==2.12.1
pycparser==2.22
pydantic==2.10.5
pydantic_core==2.27.2
pydub==0.25.1
pyflakes==3.2.0
Pygments==2.19.1
PyJWT==2.10.1
pylance==0.22.0
pymilvus==2.5.3
pymongo==4.10.1
PyMuPDF==1.24.11
PyMySQL==1.1.1
PyNaCl==1.5.0
pynndescent==0.5.13
pyparsing==3.2.1
pypdf==4.2.0
PyPika==0.48.9
pyproject_hooks==1.2.0
pyreqwest_impersonate==0.5.3
pytest==8.3.4
pytest-mock==3.14.0
python-dateutil==2.9.0.post0
python-decouple==3.8
python-docx==1.1.2
python-dotenv==1.0.1
python-iso639==2024.10.22
python-magic==0.4.27
python-multipart==0.0.12
python-oxmsg==0.0.1
python-slugify==8.0.4
pytz==2024.2
pyvis==0.3.2
PyYAML==6.0.2
qdrant-client==1.12.2
RapidFuzz==3.11.0
redis==5.2.1
referencing==0.35.1
regex==2024.11.6
requests==2.32.3
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
retrying==1.3.4
rich==13.9.4
rich-toolkit==0.13.2
robust-downloader==0.0.2
rpds-py==0.22.3
rq==2.1.0
rsa==4.9
ruff==0.9.1
s3transfer==0.10.4
safetensors==0.5.2
scikit-learn==1.6.1
scipy==1.12.0
seaborn==0.13.2
semantic-version==2.10.0
sentence-transformers==3.3.1
shellingham==1.5.4
six==1.17.0
smart-open==7.1.0
sniffio==1.3.1
snowballstemmer==2.2.0
soupsieve==2.6
Sphinx==8.1.3
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
SQLAlchemy==2.0.37
sqlmodel==0.0.22
stack-data==0.6.3
starlette==0.37.2
statsmodels==0.14.4
striprtf==0.0.26
sympy==1.13.3
tabulate==0.9.0
tantivy==0.22.0
tavily-python==0.5.0
tenacity==8.2.3
text-unidecode==1.3
textual==1.0.0
theflow==0.8.6
threadpoolctl==3.5.0
tiktoken==0.8.0
tokenizers==0.21.0
tomli==2.2.1
tomlkit==0.12.0
torch==2.2.2
tqdm==4.67.1
traitlets==5.14.3
transformers==4.48.0
trogon==0.5.0
typer==0.15.1
types-python-dateutil==2.9.0.20241206
types-requests==2.32.0.20241016
typing-inspect==0.9.0
typing_extensions==4.12.2
tzdata==2024.2
tzlocal==5.2
uc-micro-py==1.0.3
ujson==5.10.0
umap-learn==0.5.5
unstructured==0.15.14
unstructured-client==0.25.9
uritemplate==4.1.1
urllib3==2.3.0
uvicorn==0.22.0
uvloop==0.21.0
virtualenv==20.28.1
watchfiles==1.0.4
wcwidth==0.2.13
websocket-client==1.8.0
websockets==11.0.3
wikipedia==1.4.0
wrapt==1.17.2
xxhash==3.5.0
yarl==1.18.3
zipp==3.21.0

@scalenow (Author)

@varunsharma27 I don't remember the version, but I downloaded it in July 2024.

@dc13208-khilmi

> @dc13208-khilmi Tried Docker as per your suggestion. Very unstable: it plotted a graph of the Roman empire (from a text file) once, but after that it stopped working and gives errors every time you use LightRAG.

@scalenow Could you tell us what the error message was?
I don't have any LightRAG-related errors except an OpenAI API error (timeout) caused by rapid requests from LightRAG.
I adjusted the RPM setting, and it solved the problem.
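The RPM setting referred to here is a requests-per-minute throttle on calls to the LLM endpoint; the exact option name depends on your kotaemon/LightRAG configuration. Conceptually it is simple client-side throttling. A minimal sketch (a hypothetical minimum-interval limiter, not kotaemon's actual implementation):

```python
import time

class RpmThrottle:
    """Hypothetical sketch: enforce a minimum gap between API requests so
    bursts of LightRAG calls do not trip the provider's rate limit."""

    def __init__(self, rpm: int):
        self.min_interval = 60.0 / rpm  # seconds between requests
        self.last = 0.0

    def wait(self):
        # Sleep just long enough to keep the request rate under `rpm`.
        now = time.monotonic()
        delay = self.min_interval - (now - self.last)
        if delay > 0:
            time.sleep(delay)
        self.last = time.monotonic()

throttle = RpmThrottle(rpm=600)  # allow up to 600 requests per minute
throttle.wait()  # call this before each API request
```

Lowering the RPM trades indexing speed for fewer provider-side timeouts, which matches the behaviour reported above.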


gutama commented Jan 21, 2025

> @dc13208-khilmi Tried Docker as per your suggestion. Very unstable: it plotted a graph of the Roman empire (from a text file) once, but after that it stopped working and gives errors every time you use LightRAG.

> Could you tell us what the error message was? @scalenow I don't have any LightRAG-related errors except an OpenAI API error (timeout) caused by rapid requests from LightRAG. I adjusted the RPM setting, and it solved the problem.

How do I adjust the RPM setting? I still cannot use gpt-4o-mini.

ta/docstore/index_3.lance, it will be created
indexing step took 0.11965346336364746
GraphRAG embedding dim 3072
Indexing GraphRAG with LLM ChatOpenAI(api_key=sk-FKUCbQyYtEDR..., base_url=https://api.ope..., frequency_penalty=None, logit_bias=None, logprobs=None, max_retries=None, max_retries_=2, max_tokens=None, model=gpt-4o-mini, n=1, organization=None, presence_penalty=None, stop=None, temperature=None, timeout=20, tool_choice=None, tools=None, top_logprobs=None, top_p=None) and Embedding OpenAIEmbeddings(api_key=sk-FKUCbQyYtEDR..., base_url=https://api.ope..., context_length=8191, dimensions=None, max_retries=None, max_retries_=2, model=text-embedding-..., organization=None, timeout=10)...
Generating embeddings: 100%|███████████████████████████████████████████████████████████| 4/4 [00:14<00:00, 3.73s/batch]
2025-01-21T02:22:37.772412Z [error ] Failed to process document doc-32435f31e42714fe72ace9541cc7f785: '\nt\nu\np\nl\ne\n_\nd\ne\nl\ni\nm\ni\nt\ne\nr\n'
Traceback (most recent call last):
  File "/home/ginanjar/.local/lib/python3.10/site-packages/lightrag/lightrag.py", line 463, in ainsert
    raise e
  File "/home/ginanjar/.local/lib/python3.10/site-packages/lightrag/lightrag.py", line 422, in ainsert
    maybe_new_kg = await extract_entities(
  File "/home/ginanjar/.local/lib/python3.10/site-packages/lightrag/operate.py", line 331, in extract_entities
    examples = examples.format(**example_context_base)
KeyError: '\nt\nu\np\nl\ne\n_\nd\ne\nl\ni\nm\ni\nt\ne\nr\n'

gpt-4o is working well, but it is very expensive.
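The KeyError in that traceback comes from `examples.format(**example_context_base)`: Python's str.format treats every brace expression in the template as a placeholder, so a prompt template containing a placeholder that is missing from (or garbled relative to) the context dict raises KeyError with that placeholder as the key. A minimal illustration with a hypothetical template (not LightRAG's actual prompt):

```python
# str.format raises KeyError for any {placeholder} that is not supplied
# as a keyword argument; literal braces must be doubled as {{ }}.
template = "Separate fields with {tuple_delimiter}."

try:
    template.format(record_delimiter="##")  # wrong/missing key
except KeyError as e:
    print("KeyError:", e)  # prints: KeyError: 'tuple_delimiter'

print(template.format(tuple_delimiter="<|>"))  # Separate fields with <|>.
```

The newline-riddled key in the log suggests the example template itself was mangled before formatting, which points at a bug or version mismatch in the lightrag package rather than at the model settings.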

@scalenow (Author)

> @dc13208-khilmi Tried Docker as per your suggestion. Very unstable: it plotted a graph of the Roman empire (from a text file) once, but after that it stopped working and gives errors every time you use LightRAG.

> Could you tell us what the error message was? @scalenow I don't have any LightRAG-related errors except an OpenAI API error (timeout) caused by rapid requests from LightRAG. I adjusted the RPM setting, and it solved the problem.

@dc13208-khilmi @varunsharma27 I have given up on kotaemon and am evaluating other applications. The other applications are showing promising results. Very sad to leave kotaemon.

@soichisumi

@scalenow Could you please let me know which tools you are trying? I'm having a similar problem.
GraphRAG and its derivatives are promising, and we obviously need a GUI tool to interact with them.

@scalenow (Author)

@soichisumi Sure. Currently under evaluation: Verba, Open WebUI, Dify, AnythingLLM.

It will boil down to one of these soon.

@starascendin

Thanks @soichisumi, I'm running into issues with this repo as well.


CaMi1le commented Jan 26, 2025

Same issue. I have tried the ways above; they are not working.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
kotaemon 0.9.11 requires umap-learn==0.5.5, but you have umap-learn 0.5.7 which is incompatible.
ktem 0.9.11 requires python-multipart==0.0.12, but you have python-multipart 0.0.9 which is incompatible.
Successfully installed Mako-1.3.8 PyJWT-2.10.1 alembic-1.14.1 apscheduler-3.11.0 asyncer-0.0.8 cloudpickle-3.1.1 cryptography-42.0.8 datasets-3.2.0 dill-0.3.8 dspy-2.5.43 dspy-ai-2.5.43 email_validator-2.2.0 fastapi-0.111.1 fastapi-cli-0.0.7 fastapi-sso-0.10.0 fsspec-2024.9.0 future-1.0.0 gunicorn-22.0.0 hnswlib-0.8.0 httpx-0.27.2 json-repair-0.35.0 jsonschema-4.23.0 jsonschema-specifications-2024.10.1 litellm-1.53.7 magicattr-0.1.6 multiprocess-0.70.16 nano-graphrag-0.0.8.2 optuna-4.2.0 pynacl-1.5.0 python-multipart-0.0.9 referencing-0.36.2 rich-toolkit-0.13.2 rpds-py-0.22.3 rq-2.1.0 starlette-0.37.2 uvicorn-0.22.0

I will try downgrading manually, but I am not sure if this will work.

@soichisumi

How about adding the capability to connect to an external LightRAG instance? That should resolve this issue and avoid similar problems in the future.
If a new issue should be created, I will open one. @starascendin @varunsharma27 @taprosoft

@taprosoft (Collaborator)

@soichisumi Moving LightRAG to a separate service is a good idea. Will consider this in future updates. However, keep in mind that this moves the setup problem to another service, and it might bring more complexity for automated installs like the current setup scripts / Docker image.


scheckley commented Feb 24, 2025

I'm also experiencing an API timeout error with LightRAG. Was there any solution, or recommended package versions for compatibility? Similarly, nano-graphrag times out attempting to hit the LLM endpoint: openai._base_client: Retrying request to /chat/completions.

Thanks.


9 participants