Cortex: Shodan integration failing #2
I checked the logs and I can see that Cortex tried to run a Docker image: […]
My hypothesis is that because our default Docker […]
Ok. Removing […]
I tried changing log levels in […]:

```xml
<logger name="play" level="DEBUG"/>
<logger name="application" level="DEBUG"/>
```

But that didn't seem to show anything more than before.

EDIT: The correct file to edit is […]
Related issues: […]
This section of the docs explains how Cortex uses Docker for running analyzers: […]
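For reference, the flow that section describes boils down to: Cortex writes the observable and config into a per-job folder, bind-mounts that folder into the analyzer container at `/job`, and reads the result back from `output/output.json`. A minimal sketch of that round trip (the image name, paths, and payload here are illustrative assumptions):

```python
import json
import subprocess
import tempfile
from pathlib import Path

# Prepare a job folder the way Cortex does: input/input.json carries the
# observable plus the analyzer configuration.
job_dir = Path(tempfile.mkdtemp(prefix='cortex-job-'))
(job_dir / 'input').mkdir()
(job_dir / 'input' / 'input.json').write_text(
    json.dumps({'data': '1.1.1.1', 'dataType': 'ip', 'tlp': 2})
)

# Cortex bind-mounts the job folder at /job; the analyzer is expected to
# write its verdict to /job/output/output.json.
subprocess.run(['docker', 'run', '--rm', '-v', f'{job_dir}:/job',
                'cortexneurons/dshield_lookup:1'])

print((job_dir / 'output' / 'output.json').read_text())
```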
I tried adding this to […], as suggested in TheHive-Project/Cortex#313 (comment), but it didn't help.
I compared the two sources of analyzers I saw: […]
But the […] contains:

```json
{
  "name": "Shodan_InfoDomain",
  "version": "1.0",
  "author": "ANSSI",
  "url": "https://github.com/TheHive-Project/Cortex-Analyzers/Shodan",
  "license": "AGPL-V3",
  "description": "Retrieve key Shodan information on a domain.",
  "dataTypeList": [
    "domain",
    "fqdn"
  ],
  "baseConfig": "Shodan",
  "config": {
    "service": "info_domain"
  },
  "configurationItems": [
    {
      "name": "key",
      "description": "Define the API Key",
      "type": "string",
      "multi": false,
      "required": true
    }
  ],
  "dockerImage": "cortexneurons/shodan_infodomain:1"
}
```

And this appears to be the source: […]
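As a sanity check, the definitions a running Cortex has actually loaded can be listed over its REST API and compared with the catalog JSON above (a sketch; the host, port, and API key are placeholders, and `dockerImage` may not be exposed on every version):

```python
import requests

resp = requests.get(
    'http://localhost:9001/api/analyzer',
    headers={'Authorization': 'Bearer <CORTEX_API_KEY>'},
)
resp.raise_for_status()
# Print the Shodan analyzers Cortex knows about, with the image if exposed.
for analyzer in resp.json():
    if analyzer['name'].startswith('Shodan'):
        print(analyzer['name'], analyzer.get('version'), analyzer.get('dockerImage'))
```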
According to the Python error it fails on Analyzer initialization:

```python
class ShodanAnalyzer(Analyzer):
    def __init__(self):
        Analyzer.__init__(self)
```

Which calls another init for a worker in:

```python
class Analyzer(Worker):
    def __init__(self, job_directory=None):
        Worker.__init__(self, job_directory)
```

Which then apparently tries to read some JSON from standard input:

```python
# Load input
self._input = {}
if os.path.isfile('%s/input/input.json' % self.job_directory):
    with open('%s/input/input.json' % self.job_directory) as f_input:
        self._input = json.load(f_input)
else:  # If input file doesn't exist, fallback to old behavior and read input from stdin
    self.job_directory = None
    self.__set_encoding()
    r, w, e = select.select([sys.stdin], [], [], self.READ_TIMEOUT)
    if sys.stdin in r:
        self._input = json.load(sys.stdin)
    else:
        self.error('Input file doesn\'t exist')
```

Which kinda explains why the […]
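To see which branch of that fallback actually fires, the resolution logic can be reproduced standalone (a sketch mirroring the snippet above, not the cortexutils code itself; the timeout value is an assumption):

```python
import json
import os
import select
import sys

READ_TIMEOUT = 3  # assumption: cortexutils waits a few seconds on stdin

def load_input(job_directory):
    """Prefer <job>/input/input.json, fall back to JSON on stdin."""
    input_path = os.path.join(job_directory, 'input', 'input.json')
    if os.path.isfile(input_path):
        with open(input_path) as f_input:
            return json.load(f_input)
    readable, _, _ = select.select([sys.stdin], [], [], READ_TIMEOUT)
    if sys.stdin in readable:
        return json.load(sys.stdin)
    raise SystemExit("Input file doesn't exist and nothing arrived on stdin")

if __name__ == '__main__':
    job_dir = sys.argv[1] if len(sys.argv) > 1 else '/job'
    print(load_input(job_dir))
```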
According to the docs: […]

One thing to try would be to see if […]
I wanted to test if the issue is specific to that one analyzer, so I found one that doesn't require any keys: […]
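To re-run such a test job repeatedly without clicking through the UI, the Cortex run endpoint can be scripted (a sketch; the analyzer ID and API key are placeholders):

```python
import requests

# Submit an observable to the analyzer; Cortex responds with the created job.
resp = requests.post(
    'http://localhost:9001/api/analyzer/<ANALYZER_ID>/run',
    headers={'Authorization': 'Bearer <CORTEX_API_KEY>'},
    json={'data': '8.8.8.8', 'dataType': 'ip', 'tlp': 2},
)
print(resp.status_code, resp.json())
```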
When I run an analyzer I can see these folders and files created in the […]:

[…]

These disappear right after the job finishes or fails, so I can't easily examine their contents.
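One workaround is to snapshot every job folder the moment it appears, before Cortex cleans it up (a sketch; the jobs path is an assumption to be replaced with the job directory from your cortex.conf, and it likely needs to run as root to read the folders):

```python
import pathlib
import shutil
import time

JOBS_DIR = pathlib.Path('/tmp/cortex-jobs')          # assumption: Cortex job dir
SNAP_DIR = pathlib.Path('/tmp/cortex-job-snapshots')
SNAP_DIR.mkdir(exist_ok=True)

seen = set()
while True:
    # Copy each new job folder as soon as it shows up.
    for job in JOBS_DIR.iterdir():
        if job.is_dir() and job.name not in seen:
            shutil.copytree(job, SNAP_DIR / job.name, dirs_exist_ok=True)
            seen.add(job.name)
    time.sleep(0.1)
```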
I ran this to catch the container while it exists:

```sh
while true; do docker inspect $(docker ps -q); done
```

And the […]

And the command line option passed is […]

So not sure what's causing this to not find the […]
One possibility is that the […]:

```python
def __init__(self, job_directory):
    if job_directory is None:
        if len(sys.argv) > 1:
            job_directory = sys.argv[1]
        else:
            job_directory = '/job'
    self.job_directory = job_directory
```

Because the other two options, the first argument and the default of `/job`, […]
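Since everything then hinges on what the container actually receives in `sys.argv`, the invocation can be pulled straight out of the inspect output captured earlier (a sketch wrapping the same docker CLI calls):

```python
import json
import subprocess
import time

# Poll running containers and print how each was started; analyzer containers
# are short-lived, so this loops until interrupted.
while True:
    ids = subprocess.run(['docker', 'ps', '-q'],
                         capture_output=True, text=True).stdout.split()
    for cid in ids:
        raw = subprocess.run(['docker', 'inspect', cid],
                             capture_output=True, text=True).stdout
        if not raw.strip():
            continue  # container already gone
        info = json.loads(raw)[0]
        print(info['Config']['Image'],
              'cmd:', info['Path'], info['Args'],
              'binds:', info['HostConfig']['Binds'])
    time.sleep(0.05)
```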
I tried modifying the image like this:

```dockerfile
FROM cortexneurons/dshield_lookup:original
COPY worker.py /usr/local/lib/python3.8/site-packages/cortexutils/worker.py
```

But Cortex ignores it, as it appears to use the image SHA256 digest, and not the tag:

[…]
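One way to check that hypothesis is to list the local tags next to their digests and see whether the rebuilt tag still resolves to the old image (a sketch around standard docker CLI flags; locally built images show `<none>` as their digest):

```python
import subprocess

print(subprocess.run(
    ['docker', 'images', '--digests', 'cortexneurons/dshield_lookup'],
    capture_output=True, text=True,
).stdout)
```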
I managed to get an example of the `input/input.json` contents:

```json
{
  "data": "1.1.1.1",
  "dataType": "ip",
  "tlp": 2,
  "pap": 2,
  "message": "",
  "parameters": {},
  "config": {
    "proxy_https": null,
    "cacerts": null,
    "max_pap": 2,
    "jobTimeout": 30,
    "service": "query",
    "check_tlp": true,
    "proxy_http": null,
    "max_tlp": 2,
    "auto_extract_artifacts": false,
    "jobCache": 10,
    "check_pap": true
  }
}
```
I ran the docker image the same way and it worked:

[…]

So I'm thinking the issue is our Docker UID remapping.
When Cortex creates the job folder the permissions look like this:

[…]
When I mirror the permissions the error is:

[…]

Which isn't the same as what we see, but close enough.
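For reference, here is roughly how that mirroring can be scripted (a sketch; the 0700 owner-only mode is an assumption since the actual listing was elided above, and it should be run as root so ownership matches too):

```python
import json
import os
import subprocess
import tempfile

# Build a job folder, then lock it down like the Cortex-created one.
job_dir = tempfile.mkdtemp(prefix='cortex-job-')
os.makedirs(os.path.join(job_dir, 'input'))
with open(os.path.join(job_dir, 'input', 'input.json'), 'w') as f:
    json.dump({'data': '1.1.1.1', 'dataType': 'ip', 'tlp': 2}, f)
os.chmod(job_dir, 0o700)  # assumption: owner-only access on the host

# With userns-remap enabled, root inside the container maps to an
# unprivileged host UID, which cannot enter a 0700 root-owned directory.
subprocess.run(['docker', 'run', '--rm', '-v', f'{job_dir}:/job',
                'cortexneurons/dshield_lookup:1'])
```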
This is necessary because our logging config and UID remapping break how Cortex runs its analyzers/responders. #2

Signed-off-by: Jakub Sokołowski <[email protected]>
@corpetty tried to configure Shodan integration for Cortex but it failed with:

[…]