Connection to Mail server keeps timing out #1405

Closed · Hammy25 opened this issue May 9, 2019 · 13 comments

Hammy25 commented May 9, 2019

My Mail URL Fetcher collector bot's connection keeps timing out. What could be the issue?

[screenshot attached]

ghost commented May 10, 2019

Could be anything. Can you exclude network issues? Is a proxy perhaps necessary?
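
A minimal way to rule out basic network problems is to try the IMAP connection by hand from the IntelMQ host, for example (hostname and port below are placeholders for your mail server):

# Minimal IMAP connectivity check, run from the IntelMQ host.
# 'mail.example.com' and 993 are placeholders -- use your server's values.
import imaplib
import socket

socket.setdefaulttimeout(30)  # fail after 30 seconds instead of the OS default

try:
    server = imaplib.IMAP4_SSL('mail.example.com', 993)
    print('Connected, server greeting:', server.welcome)
    server.logout()
except OSError as exc:  # TimeoutError is a subclass of OSError
    print('Connection failed:', exc)

If this already times out, the problem is likely on the network or firewall level rather than in the bot.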

ghost added the support label May 10, 2019

Hammy25 commented May 12, 2019

Network is working well.
You are suggesting I use a proxy?
Also, if you wouldn't mind, I'm experiencing another problem with Redis. My botnet (because the "gethostbyname" bots are stopped) stops when it is trying to take a screenshot. My server has 16 GB of RAM. Could that be the problem? Or is my configuration not efficient enough to use the resources available?

ghost commented May 13, 2019

Which version of intelmq are you using? Please post the output of intelmqctl --version and intelmqctl check.

> Network is working well.
> You are suggesting I use a proxy?

No, I'm asking whether a proxy is necessary. Could you please show the bot's configuration (without the password) and the IMAP server's settings?

> My botnet (because the "gethostbyname" bots are stopped) stops when it is trying to take a screenshot.

A screenshot in the browser affects the server?

Hammy25 commented May 13, 2019

I'm running IntelMQ version 1.1.2

> Which version of intelmq are you using? Please post the output of intelmqctl --version and intelmqctl check.

Output of intelmqctl check:

intelmq@intelmq:/var/lib/intelmq$ intelmqctl check
intelmqctl: Reading configuration files.
intelmqctl: Checking defaults configuration.
intelmqctl: Checking runtime configuration.
intelmqctl: Checking runtime and pipeline configuration.
intelmqctl: Checking harmonization configuration.
intelmqctl: Checking for bots.
intelmqctl: /var/lib/intelmq/.local/lib/python3.5/site-packages/requests/__init__.py:83: RequestsDependencyWarning: Old version of cryptography ([1, 2, 3]) may cause slowdown.
  warnings.warn(warning, RequestsDependencyWarning)

intelmqctl: No issues found. 

Cannot do this at the moment. Maybe let me know what specific setting you want. The bot connects the first time but keeps timing out as the botnet continues running.

> No, I'm asking whether a proxy is necessary. Could you please show the bot's configuration (without the password) and the IMAP server's settings?

No. Not a screenshot of the browser. I'm talking about RDB snapshots.
It presents me with the following error: "No space left on device. Redis can't save its snapshots."

> A screenshot in the browser affects the server?

ghost commented May 13, 2019

> Cannot do this at the moment. Maybe let me know what specific setting you want. The bot connects the first time but keeps timing out as the botnet continues running.

Ok, can you please show the full log? Even better with debug logging level.

> No, I'm asking whether a proxy is necessary. Could you please show the bot's configuration (without the password) and the IMAP server's settings?

> No. Not a screenshot of the browser. I'm talking about RDB snapshots.
> It presents me with the following error: "No space left on device. Redis can't save its snapshots."

Well, it looks like you have overloaded your system with too much data. It means what is written there: Redis needs to save snapshots, but there's not enough disk space. So Redis is unusable; maybe you can see more details in its logs. If IntelMQ bots get this error, they just stop with the message you are seeing, as there's nothing they can do at that moment.
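
A quick way to see how much data is piling up is to check the queue lengths directly in Redis; a rough sketch, assuming IntelMQ's default pipeline database 2 on localhost (intelmqctl list queues should show similar information):

# Rough sketch: print the length of every IntelMQ pipeline queue.
# Assumes the default pipeline settings (localhost:6379, database 2);
# adjust host/port/db to match your defaults.conf.
import redis

r = redis.Redis(host='localhost', port=6379, db=2)
for key in sorted(r.keys('*-queue*')):
    print(key.decode(), r.llen(key))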

Hammy25 commented May 13, 2019

2019-05-13 06:25:04,215 - Darknet-Mail-URL-Fetcher-Collector - INFO - Received SIGHUP, initializing again later.
2019-05-13 06:25:06,532 - Darknet-Mail-URL-Fetcher-Collector - INFO - Handling SIGHUP, initializing again now.
2019-05-13 06:25:06,599 - Darknet-Mail-URL-Fetcher-Collector - DEBUG - Disconnected from destination pipeline.
2019-05-13 06:25:06,729 - Darknet-Mail-URL-Fetcher-Collector - INFO - MailURLCollectorBot initialized with id Darknet-Mail-URL-Fetcher-Collector and intelmq 1.1.2 and python 3.5.2 (default, Nov 12 2018, 13:43:14) as process 22356.
2019-05-13 06:25:06,729 - Darknet-Mail-URL-Fetcher-Collector - INFO - Bot is starting.
2019-05-13 06:25:06,789 - Darknet-Mail-URL-Fetcher-Collector - DEBUG - Loading pipeline configuration from '/etc/intelmq/pipeline.conf'.
2019-05-13 06:25:06,798 - Darknet-Mail-URL-Fetcher-Collector - DEBUG - Loading Harmonization configuration from '/etc/intelmq/harmonization.conf'.
2019-05-13 06:25:06,811 - Darknet-Mail-URL-Fetcher-Collector - DEBUG - Loading destination pipeline and queues ['Darknet-ShadowServer-Parser-queue'].
2019-05-13 06:25:06,817 - Darknet-Mail-URL-Fetcher-Collector - DEBUG - Connected to destination queues.
2019-05-13 06:25:06,817 - Darknet-Mail-URL-Fetcher-Collector - INFO - Pipeline ready.
2019-05-13 06:27:14,633 - Darknet-Mail-URL-Fetcher-Collector - ERROR - Bot has found a problem.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 170, in start
    self.process()
  File "/usr/lib/python3/dist-packages/intelmq/bots/collectors/mail/collector_mail_url.py", line 42, in process
    mailbox = self.connect_mailbox()
  File "/usr/lib/python3/dist-packages/intelmq/bots/collectors/mail/collector_mail_url.py", line 38, in connect_mailbox
    self.parameters.mail_ssl)
  File "/var/lib/intelmq/.local/lib/python3.5/site-packages/imbox/imbox.py", line 22, in __init__
    ssl_context=ssl_context, starttls=starttls)
  File "/var/lib/intelmq/.local/lib/python3.5/site-packages/imbox/imap.py", line 18, in __init__
    self.server = IMAP4_SSL(self.hostname, self.port, ssl_context=ssl_context)
  File "/usr/lib/python3.5/imaplib.py", line 1269, in __init__
    IMAP4.__init__(self, host, port)
  File "/usr/lib/python3.5/imaplib.py", line 189, in __init__
    self.open(host, port)
  File "/usr/lib/python3.5/imaplib.py", line 1282, in open
    IMAP4.open(self, host, port)
  File "/usr/lib/python3.5/imaplib.py", line 286, in open
    self.sock = self._create_socket()
  File "/usr/lib/python3.5/imaplib.py", line 1272, in _create_socket
    sock = IMAP4._create_socket(self)
  File "/usr/lib/python3.5/imaplib.py", line 276, in _create_socket
    return socket.create_connection((self.host, self.port))
  File "/usr/lib/python3.5/socket.py", line 711, in create_connection
    raise err
  File "/usr/lib/python3.5/socket.py", line 702, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
2019-05-13 06:27:14,633 - Darknet-Mail-URL-Fetcher-Collector - INFO - Bot will continue in 15 seconds.
2019-05-13 06:29:36,968 - Darknet-Mail-URL-Fetcher-Collector - ERROR - Bot has found a problem.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 170, in start
    self.process()
  File "/usr/lib/python3/dist-packages/intelmq/bots/collectors/mail/collector_mail_url.py", line 42, in process
    mailbox = self.connect_mailbox()
  File "/usr/lib/python3/dist-packages/intelmq/bots/collectors/mail/collector_mail_url.py", line 38, in connect_mailbox
    self.parameters.mail_ssl)
  File "/var/lib/intelmq/.local/lib/python3.5/site-packages/imbox/imbox.py", line 22, in __init__
    ssl_context=ssl_context, starttls=starttls)
  File "/var/lib/intelmq/.local/lib/python3.5/site-packages/imbox/imap.py", line 18, in __init__
    self.server = IMAP4_SSL(self.hostname, self.port, ssl_context=ssl_context)
  File "/usr/lib/python3.5/imaplib.py", line 1269, in __init__
    IMAP4.__init__(self, host, port)
  File "/usr/lib/python3.5/imaplib.py", line 189, in __init__
    self.open(host, port)
  File "/usr/lib/python3.5/imaplib.py", line 1282, in open
    IMAP4.open(self, host, port)
  File "/usr/lib/python3.5/imaplib.py", line 286, in open
    self.sock = self._create_socket()
  File "/usr/lib/python3.5/imaplib.py", line 1272, in _create_socket
    sock = IMAP4._create_socket(self)
  File "/usr/lib/python3.5/imaplib.py", line 276, in _create_socket
    return socket.create_connection((self.host, self.port))
  File "/usr/lib/python3.5/socket.py", line 711, in create_connection
    raise err
  File "/usr/lib/python3.5/socket.py", line 702, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
2019-05-13 06:29:36,969 - Darknet-Mail-URL-Fetcher-Collector - INFO - Bot will continue in 15 seconds.
2019-05-13 06:31:59,305 - Darknet-Mail-URL-Fetcher-Collector - ERROR - Bot has found a problem.

Then it continues trying and timing out, a loop that doesn't end.

> Ok, can you please show the full log? Even better with debug logging level.

Okay. So you believe I'm processing too much data? The solution would be to reduce the data or upgrade my hardware? That is what I'd like to know. The Redis log doesn't show much more detail, but the GetHostByName bot displays the following:

2019-05-10 14:47:48,499 - Botnet-Gethostbyname-Expert - INFO - Processed 500 messages since last logging.
2019-05-10 14:51:46,889 - Botnet-Gethostbyname-Expert - ERROR - Out of disk space. Exit immediately.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/intelmq/lib/pipeline.py", line 136, in send
    self.pipe.lpush(destination_queue, message)
  File "/usr/lib/python3/dist-packages/redis/client.py", line 1227, in lpush
    return self.execute_command('LPUSH', name, *values)
  File "/usr/lib/python3/dist-packages/redis/client.py", line 573, in execute_command
    return self.parse_response(connection, command_name, **options)
  File "/usr/lib/python3/dist-packages/redis/client.py", line 585, in parse_response
    response = connection.read_response()
File "/usr/lib/python3/dist-packages/redis/connection.py", line 582, in read_response
    raise response
redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 170, in start
    self.process()
  File "/usr/lib/python3/dist-packages/intelmq/bots/experts/gethostbyname/expert.py", line 48, in process
    self.send_message(event)
  File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 392, in send_message
    self.__destination_pipeline.send(raw_message, path=path)
  File "/usr/lib/python3/dist-packages/intelmq/lib/pipeline.py", line 142, in send
    raise IOError(28, 'No space left on device. Redis can\'t save its snapshots.')

Also around the same time the redis log has the following information:

1075:M 10 May 14:51:43.008 * 10000 changes in 60 seconds. Saving...
1075:M 10 May 14:51:43.008 # Can't save in background: fork: Cannot allocate memory
1075:M 10 May 14:51:49.023 * 10000 changes in 60 seconds. Saving...

> Well, it looks like you have overloaded your system with too much data. It means what is written there: Redis needs to save snapshots, but there's not enough disk space. So Redis is unusable; maybe you can see more details in its logs. If IntelMQ bots get this error, they just stop with the message you are seeing, as there's nothing they can do at that moment.

Also, thank you for the responses. I really appreciate you taking the time to assist me.

ghost commented May 13, 2019

> Then it continues trying and timing out, a loop that doesn't end.

The endless loop is OK here IMO, as this is a connection issue, which is usually temporary.

In the previous answer you wrote:

> The bot connects the first time

Not in the log you showed here. Does it work before the SIGHUP/reload? That would be strange too, because that only causes the configuration to be re-read.

> Okay. So you believe I'm processing too much data? The solution would be to reduce the data or upgrade my hardware? That is what I'd like to know.

No, there's just too much data at once. Try not to start the collectors at the same time, or reduce some of the data. Maybe I can give more specific tips if you let me know your sources or use cases (here or via mail).

> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 170, in start
>     self.process()
>   File "/usr/lib/python3/dist-packages/intelmq/bots/experts/gethostbyname/expert.py", line 48, in process
>     self.send_message(event)
>   File "/usr/lib/python3/dist-packages/intelmq/lib/bot.py", line 392, in send_message
>     self.__destination_pipeline.send(raw_message, path=path)
>   File "/usr/lib/python3/dist-packages/intelmq/lib/pipeline.py", line 142, in send
>     raise IOError(28, 'No space left on device. Redis can\'t save its snapshots.')


> Also around the same time the redis log has the following information:
>
> 1075:M 10 May 14:51:43.008 * 10000 changes in 60 seconds. Saving...
> 1075:M 10 May 14:51:43.008 # Can't save in background: fork: Cannot allocate memory
> 1075:M 10 May 14:51:49.023 * 10000 changes in 60 seconds. Saving...

Ok, it says there's not enough memory. I will change the log message to cover this case too.
Redis needs to duplicate all the data in memory for the snapshot :/

If possible, and if Redis still allows the data to be modified, you could:

  • delete data: clear a queue where you do not need the data or where you can easily restore it (web sources, data fetched via IMAP, etc.)
  • dump some reports (because they are typically large) from a queue to disk and restore them later once memory usage is lower again. I attached two very basic scripts to do this; you will need to modify them, of course. Then start the bots in an order which does not cause the system to collapse again (outputs first and the collectors at the very end, to get the data out).

dump.txt
restore.txt
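
Roughly, the dump idea is to pop everything from one overloaded queue into a file; this is only a sketch of the idea (not the attached scripts), the queue name, Redis host and database are assumptions, and it only works while Redis still accepts modifying commands:

# Sketch only: move all messages from one IntelMQ queue out of Redis into a file.
# Queue name, host and db are assumptions; assumes messages contain no raw newlines.
import redis

QUEUE = 'Darknet-ShadowServer-Parser-queue'
r = redis.Redis(host='localhost', port=6379, db=2)

with open('/tmp/queue-dump.txt', 'wb') as dump:
    while True:
        message = r.rpop(QUEUE)  # take from the consuming end (oldest first)
        if message is None:
            break
        dump.write(message + b'\n')

The restore counterpart pushes the messages back once memory is available again; LPUSH in file order keeps the FIFO order, because the consumers read from the other end of the list:

# Sketch only: push the dumped messages back into the queue.
import redis

QUEUE = 'Darknet-ShadowServer-Parser-queue'
r = redis.Redis(host='localhost', port=6379, db=2)

with open('/tmp/queue-dump.txt', 'rb') as dump:
    for line in dump:
        r.lpush(QUEUE, line.rstrip(b'\n'))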

ghost pushed a commit that referenced this issue May 13, 2019

Hammy25 commented May 14, 2019

I assumed that when it says "Pipeline ready" it has connected successfully. If that's not the case, it seems like it doesn't connect at all. A server problem?

We can discuss the feeds over email.

> Not in the log you showed here. Does it work before the SIGHUP/reload? That would be strange too, because that only causes the configuration to be re-read.

Thanks for the solutions. I will implement them soon and will let you know if it works out well.

ghost commented May 14, 2019

> I assumed that when it says "Pipeline ready" it has connected successfully. If that's not the case, it seems like it doesn't connect at all. A server problem?

Redis does not give this error on the connection establishment, but later on the first write attempt.

Hammy25 commented May 14, 2019

Okay. So it's the configuration that is wrong? Or the server settings? Or network?

> Redis does not give this error on the connection establishment, but later on the first write attempt.

ghost commented May 14, 2019

> Okay. So it's the configuration that is wrong? Or the server settings? Or network?

What do you refer to here? There are two problems: the IMAP connection issue and the overloaded Redis. First fix the second, then we can continue with the first. I already made suggestions on how to tackle the overloaded Redis.

ghost closed this as completed May 14, 2019

Hammy25 commented May 14, 2019

I was referring to the IMAP connection issue.

> What do you refer to here? There are two problems: the IMAP connection issue and the overloaded Redis. First fix the second, then we can continue with the first. I already made suggestions on how to tackle the overloaded Redis.

ghost commented May 14, 2019

First fix the Redis issue.
