[Question]: Max token parameter doesn't work. #5640
Labels
question
Further information is requested
Comments
Hiya, what's your RAGFlow version? From v0.17.0 onwards, the max_token limit is optional: if you check the chat assistant settings, you will see that max_token is disabled by default.
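To see why a very high max_token setting can still yield short output, note that the effective output budget is bounded by what remains of the model's context window after the prompt, regardless of the configured cap. A minimal illustrative sketch (not RAGFlow's actual code; the function name and logic are hypothetical):

```python
def effective_output_budget(context_window, prompt_tokens, max_token=None):
    """Return the largest number of tokens the model can still generate.

    The output can never exceed what remains of the context window,
    no matter how high max_token is set. max_token=None models the
    'disabled' state, where no extra cap applies.
    """
    remaining = max(context_window - prompt_tokens, 0)
    if max_token is None:
        return remaining
    return min(max_token, remaining)

# With a 32768-token context and a 2000-token prompt, requesting
# 128000 output tokens still yields at most 30768.
print(effective_output_budget(32768, 2000, 128000))  # 30768
```

So raising max_token in the assistant settings past the remaining context has no effect; the context window is the binding constraint.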
KevinHuSh added a commit that referenced this issue (Mar 6, 2025)
cike8899 added a commit to cike8899/ragflow that referenced this issue (Mar 6, 2025)
cike8899 added a commit to cike8899/ragflow that referenced this issue (Mar 6, 2025)
KevinHuSh pushed a commit that referenced this issue (Mar 7, 2025)
TeslaZY pushed a commit to TeslaZY/ragflow that referenced this issue (Mar 8, 2025): "Fix: Remove the document language parameter. infiniflow#5686" (…low#5728) — Bug Fix (non-breaking change which fixes an issue)
Describe your problem
The LLM I configured is from the qwen2.5 series, run with ollama, and I set its maximum context to 32768. Then, when creating an assistant and configuring the model, I set the maximum token count to 128000, but the output is still very short and is clearly cut off before finishing. What other settings affect this output length?
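One other place worth checking (an assumption, not confirmed in this thread) is ollama's own model configuration: ollama applies a default context length unless `num_ctx` is raised in a Modelfile, and `num_predict` caps the number of generated tokens. A minimal Modelfile sketch (the derived tag `qwen2.5-32k` is a hypothetical name):

```
FROM qwen2.5
PARAMETER num_ctx 32768
PARAMETER num_predict -1
```

This could then be registered with `ollama create qwen2.5-32k -f Modelfile` and selected in RAGFlow, so the serving side is not silently truncating the context or the output.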