Slow performance on latest release in Docker #2098
Comments
Hi @Narzhan, thanks for the report. Which Docker images do you use in particular? Is "only" …
Hey @Narzhan :) Thanks for the report! We've updated the Docker image …
Hi, while starting the bot via …, the processing of messages (the "runtime") was slower compared to the base version (…).
Hey, it's me again :) Over the past couple of days we investigated & hunted down the performance issue. The outcome so far is described below. Startup performance … Runtime performance … To answer your question, @Narzhan: …
The harmonization file had been loaded for each new report, which nearly kills the CPU, because the file has to be opened, parsed & loaded into memory every time. Fixes #2098 Signed-off-by: Sebastian Waldbauer <[email protected]>
As of now, each new message re-loads the actual harmonization configuration, which unfortunately results in a high CPU drain. This commit adds a harmonization parameter to the function call, so the harmonization is read from memory already held by the Python process & no further file reads have to be done. Fixes #2098 Signed-off-by: Sebastian Waldbauer <[email protected]>
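The fix described in the commit message above can be sketched as follows. This is a hypothetical illustration, not intelmq's actual API: the names `load_harmonization` and `Report` are stand-ins for the real code, which passes the already-parsed harmonization dict into the message constructor instead of re-reading the file per message.

```python
import json

def load_harmonization(path="harmonization.conf"):
    """Open and parse the harmonization file (the expensive step)."""
    with open(path) as f:
        return json.load(f)

class Report:
    def __init__(self, harmonization=None):
        if harmonization is None:
            # Old behaviour: the file was opened and parsed
            # again for every single message.
            harmonization = load_harmonization()
        # New behaviour: the caller supplies the dict that was
        # parsed once at bot startup, so no file I/O happens here.
        self.harmonization = harmonization
```

With this change the open/parse cost is paid once when the bot starts instead of once per message, which is where the reported CPU drain came from.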
We identified two issues:
We'll do a hotfix 3.0.2 release, hopefully tomorrow.
I'll test the version …
I've tried to upgrade our intelmq setup to the major version 3.0.x and found that the whole processing was significantly slower. During troubleshooting I found that starting a bot and using the intelmqctl utility takes longer than on the previous version.
Setup:
I've tried to profile the run of two commands (all output is sorted by descending cumulative time):
For v2.2.3:
Listing queues
Deduplicator
For v3.0.0:
Listing queues
Deduplicator
For v3.0.1:
The profiling had similar output, but the times were generally better compared to v3.0.0.
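For reference, profiles like the ones above can be produced with Python's built-in cProfile/pstats modules. This is a minimal sketch; on the command line the rough equivalent would be running the command under `python -m cProfile -s cumulative`. The `work` function below is a stand-in for the profiled command, not intelmqctl itself.

```python
import cProfile
import io
import pstats

def work():
    # Placeholder for the command being profiled
    # (e.g. listing queues or running the deduplicator once).
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Print the top entries sorted by descending cumulative time,
# matching the sort order used in the profiles above.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Sorting by cumulative time is what makes the slow call chains (file loading, config parsing) bubble to the top of the listing.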
Issue
In its current state intelmq is not usable for us, and the deployment to production needs to be delayed. I wasn't able to pinpoint where the slowdown happens. I hope this information helps; if not, I can provide more.