diff --git a/CHANGELOG.md b/CHANGELOG.md
index b776f24041..be3ce0f683 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,14 +1,25 @@
 # Change Log

-## [2.11.0](https://github.com/CERT-BDF/TheHive/tree/HEAD) (2017-05-12)
+## [2.11.1](https://github.com/CERT-BDF/TheHive/tree/HEAD) (2017-05-17)
+
+[Full Changelog](https://github.com/CERT-BDF/TheHive/compare/2.11.0...2.11.1)
+
+**Implemented enhancements:**
+
+- Show available reports number for each observable [\#211](https://github.com/CERT-BDF/TheHive/issues/211)
+- Merge Duplicate Tasks during Case Merge [\#180](https://github.com/CERT-BDF/TheHive/issues/180)
+
+**Fixed bugs:**
+
+- Case templates not applied when converting an alert to a case [\#206](https://github.com/CERT-BDF/TheHive/issues/206)
+- Observable of merged cased might have duplicate tags [\#205](https://github.com/CERT-BDF/TheHive/issues/205)
+- Error updating case templates [\#204](https://github.com/CERT-BDF/TheHive/issues/204)
+
+## [2.11.0](https://github.com/CERT-BDF/TheHive/tree/2.11.0) (2017-05-14)

 [Full Changelog](https://github.com/CERT-BDF/TheHive/compare/2.10.2...2.11.0)

 **Implemented enhancements:**

-- Improve logs browsing [\#128](https://github.com/CERT-BDF/TheHive/issues/128)
-- Refresh the UI's skin [\#145](https://github.com/CERT-BDF/TheHive/issues/145)
-- Show severity on the "Cases Page" [\#165](https://github.com/CERT-BDF/TheHive/issues/165)
-- Update the datalist filter previews to display meaningful values [\#166](https://github.com/CERT-BDF/TheHive/issues/166)
 - Display the logos of the integrated external services [\#198](https://github.com/CERT-BDF/TheHive/issues/198)
 - TheHive send to many information to Cortex when an analyze is requested [\#196](https://github.com/CERT-BDF/TheHive/issues/196)
 - Sort the list of report templates [\#195](https://github.com/CERT-BDF/TheHive/issues/195)
@@ -17,26 +28,32 @@
 - Connect to Cortex protected by Basic Auth [\#173](https://github.com/CERT-BDF/TheHive/issues/173)
 - Implement the alerting framework feature [\#170](https://github.com/CERT-BDF/TheHive/issues/170)
 - Make the flow collapsible, in case details page [\#167](https://github.com/CERT-BDF/TheHive/issues/167)
+- Update the datalist filter previews to display meaningful values [\#166](https://github.com/CERT-BDF/TheHive/issues/166)
+- Show severity on the "Cases Page" [\#165](https://github.com/CERT-BDF/TheHive/issues/165)
+- Add pagination component at the top of all the data lists [\#151](https://github.com/CERT-BDF/TheHive/issues/151)
 - Connect to Cortex instance via proxy [\#147](https://github.com/CERT-BDF/TheHive/issues/147)
+- Disable field autocomplete on the login form [\#146](https://github.com/CERT-BDF/TheHive/issues/146)
+- Refresh the UI's skin [\#145](https://github.com/CERT-BDF/TheHive/issues/145)
+- Add support of case template in back-end API [\#144](https://github.com/CERT-BDF/TheHive/issues/144)
 - Proxy authentication [\#143](https://github.com/CERT-BDF/TheHive/issues/143)
-- Add pagination component at the top of all the data lists [\#151](https://github.com/CERT-BDF/TheHive/issues/151)
+- Improve logs browsing [\#128](https://github.com/CERT-BDF/TheHive/issues/128)
+- Feature request: Autocomplete tags [\#119](https://github.com/CERT-BDF/TheHive/issues/119)
 - Ignored MISP events are no longer visible and cannot be imported [\#107](https://github.com/CERT-BDF/TheHive/issues/107)
-- Reordering Tasks [\#21](https://github.com/CERT-BDF/TheHive/issues/21)
 - MISP import filter / filtering of events [\#86](https://github.com/CERT-BDF/TheHive/issues/86)
-- Add support of case template in back-end API [\#144](https://github.com/CERT-BDF/TheHive/issues/144)
-- Disable field autocomplete on the login form [\#146](https://github.com/CERT-BDF/TheHive/issues/146)
-- Feature request: Autocomplete tags [\#119](https://github.com/CERT-BDF/TheHive/issues/119)
+- Reordering Tasks [\#21](https://github.com/CERT-BDF/TheHive/issues/21)

 **Fixed bugs:**

+- Authentication fails with wrong message if database migration is needed [\#200](https://github.com/CERT-BDF/TheHive/issues/200)
+- Fix the success message when running a set of analyzers [\#199](https://github.com/CERT-BDF/TheHive/issues/199)
 - Duplicate HTTP calls in case page [\#187](https://github.com/CERT-BDF/TheHive/issues/187)
 - Job status refresh [\#171](https://github.com/CERT-BDF/TheHive/issues/171)
-- Fix the success message when running a set of analyzers[\#199](https://github.com/CERT-BDF/TheHive/issues/199)
-- Authentication fails with wrong message if database migration is needed [\#200](https://github.com/CERT-BDF/TheHive/issues/200)

 **Closed issues:**

 - Support for cuckoo malware analysis plattform \(link analysis\) [\#181](https://github.com/CERT-BDF/TheHive/issues/181)
+- Scala code cleanup [\#153](https://github.com/CERT-BDF/TheHive/issues/153)

 **Merged pull requests:**
diff --git a/build.sbt b/build.sbt
index 86d7cba2ad..0997ddae34 100644
--- a/build.sbt
+++ b/build.sbt
@@ -104,6 +104,7 @@ packageBin := { (packageBin in Rpm).value }

 // DEB //
+version in Debian := version.value + "-1"
 debianPackageRecommends := Seq("elasticsearch")
 debianPackageDependencies += "java8-runtime-headless | java8-runtime"
 maintainerScripts in Debian := maintainerScriptsFromDirectory(
@@ -114,7 +115,7 @@ linuxEtcDefaultTemplate in Debian := (baseDirectory.value / "package" / "etc_def
 linuxMakeStartScript in Debian := None

 // RPM //
-rpmRelease := "8"
+rpmRelease := "1"
 rpmVendor in Rpm := "TheHive Project"
 rpmUrl := Some("http://thehive-project.org/")
 rpmLicense := Some("AGPL")
@@ -139,7 +140,7 @@ packageBin in Rpm := {

 // DOCKER //
 import com.typesafe.sbt.packager.docker.{ Cmd, ExecCmd }
-
+version in Docker := version.value + "-1"
 defaultLinuxInstallLocation in Docker := "/opt/thehive"
 dockerRepository := Some("certbdf")
 dockerUpdateLatest := true
diff --git a/docker/thehive/thehive.conf b/docker/thehive/thehive.conf
deleted file mode 100644
index c1eb27fd4c..0000000000
--- a/docker/thehive/thehive.conf
+++ /dev/null
@@ -1,5 +0,0 @@
-cortex {
-  aa {
-    url = "http://192.168.1.1"
-  }
-}
diff --git a/docs/FAQ.md b/docs/FAQ.md
deleted file mode 100644
index 9a9707cfd0..0000000000
--- a/docs/FAQ.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Cases and Tasks
-
-- [I Can't Add a Template](https://github.com/CERT-BDF/TheHive/wiki/FAQ#i-cant-add-a-template)
-- [Why My Freshly Added Template Doesn't Show Up?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#why-my-freshly-added-template-doesnt-show-up)
-- [Can I Use a Specific Template for Imported MISP Events?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#can-i-use-a-specific-template-for-imported-misp-events)
-
-## Templates
-### I Can't Add a Template
-You need to log in as an administrator to add a template.
-
-### Why My Freshly Added Template Doesn't Show Up?
-When you add a new template and hit the `+NEW` button, you don't see it because, unlike other events that you can see in the Flow, it is not broadcast to all user sessions. So you need to refresh the page before clicking the `+NEW` button.
-
-You don't need to log out then log in again.
-
-### Can I Use a Specific Template for Imported MISP Events?
-Definitely! You just need to add a `caseTemplate` parameter in the section corresponding to the MISP connector in your `conf/application.conf` file. This is described in the [Administrator's Guide](https://github.com/CERT-BDF/TheHive/wiki/Administrator's-guide#48-misp).
-
-# Analyzers
-- [I Would Like to Contribute or Request a New Analyzer](https://github.com/CERT-BDF/TheHive/wiki/FAQ#i-would-like-to-contribute-or-request-a-new-analyzer)
-
-## General
-### I Would Like to Contribute or Request a New Analyzer
-Analyzers are no longer bundled with TheHive. Since the release of Buckfast (TheHive 2.10), the analysis engine has been released as a separate product called [Cortex](https://github.com/CERT-BDF/Cortex). If you'd like to develop or ask for an analyzer that will help you get the most out of TheHive, please open a [feature request](https://github.com/CERT-BDF/Cortex-Analyzers/issues/new) first. This will give us a chance to validate the use cases and avoid having multiple persons working on the same analyzer.
-
-Once validated, you can either develop your analyzer or wait for TheHive Project or a contributor to undertake the task. If everything is alright, we will schedule its addition to a future Cortex release.
-
-# Miscellaneous Questions
-
-- [Can I Enable HTTPS to Connect to TheHive?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#can-i-enable-https-to-connect-to-thehive)
-- [Can I Import Events from Multiple MISP Servers?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#can-i-import-events-from-multiple-misp-servers)
-- [Can I connect TheHive to an AWS ElasticSearch service?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#can-i-connect-thehive-to-an-aws-elasticsearch-service)
-- [Do you have any plans for ElasticSearch 5.x support in the future?](https://github.com/CERT-BDF/TheHive/wiki/FAQ#do-you-have-any-plans-for-elasticsearch-5x-support-in-the-future)
-
-### Can I Enable HTTPS to Connect to TheHive?
-#### TL;DR
-Add the following lines to `/etc/thehive/application.conf`:
-
-    https.port: 9443
-    play.server.https.keyStore {
-      path: "/path/to/keystore.jks"
-      type: "JKS"
-      password: "password_of_keystore"
-    }
-
-HTTP can be disabled by adding the line `http.port=disabled`.
-#### Details
-Please read the [relevant section](https://github.com/CERT-BDF/TheHive/wiki/Configuration#9-https) in the Configuration guide.
-
-### Can I Import Events from Multiple MISP Servers?
-Yes, this is possible. For each MISP server, add a `misp` section in your `conf/application.conf` file as described in the [Administrator's Guide](https://github.com/CERT-BDF/TheHive/wiki/Configuration#7-misp).
-
-### Can I connect TheHive to an AWS ElasticSearch service?
-The AWS Elasticsearch service only supports the HTTP transport protocol. It does not support the binary protocol which the Java client used by TheHive relies on to communicate with ElasticSearch. As a result, it is not possible to set up TheHive with the AWS Elasticsearch service.
-More information is available at the following URLs:
-- [http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-limits.html](http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-limits.html)
-
-> “TCP Transport: The service supports HTTP on port 80, but does not support TCP transport.”
-
-- [https://www.elastic.co/guide/en/elasticsearch/reference/5.1/modules-network.html#_transport_and_http_protocols](https://www.elastic.co/guide/en/elasticsearch/reference/5.1/modules-network.html#_transport_and_http_protocols)
-> “TCP Transport: Used for communication between nodes in the cluster, by the Java Transport client and by the Tribe node.
-> HTTP: Exposes the JSON-over-HTTP interface used by all clients other than the Java clients.”
-
-### Do you have any plans for ElasticSearch 5.x support in the future?
-We haven't planned it yet. Please note that it's easier to move from ES2 to ES5 than from 1.x to version 2.
-We will give it a try as soon as we can and let you know.
diff --git a/docs/README.md b/docs/README.md
deleted file mode 100644
index 016c6e86aa..0000000000
--- a/docs/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-TheHive is a scalable 3-in-1 open source and free security incident response platform designed to make life easier for SOCs, CSIRTs, CERTs and any information security practitioner dealing with security incidents that need to be investigated and acted upon swiftly.
-
-## Hardware Pre-requisites
-
-TheHive uses ElasticSearch to store data. Both products run on a Java VM. We recommend using a virtual machine with 8 vCPUs, 8 GB of RAM and 60 GB of disk. You can also use a physical machine with similar specifications.
-
-## What's New?
-
-[Changelog](/CHANGELOG.md)
-[Migration Guide](migration-guide.md)
-
-## Installation Guides
-
-TheHive can be installed using:
-- An [RPM package](installation/rpm-guide.md)
-- A [DEB package](installation/deb-guide.md)
-- [Docker](installation/docker-guide.md)
-- [Binary](installation/binary-guide.md)
-- An [Ansible script](https://github.com/drewstinnett/ansible-thehive) contributed by [@drewstinnett](https://github.com/drewstinnett)
-
-TheHive can also be [built from sources](installation/build-guide.md).
-
-## Administration Guides
-
-[Administrator's guide](admin/admin-guide.md)
-[Configuration guide](admin/configuration.md)
-[Updating](admin/updating.md)
-[Backup & Restore](admin/backup-restore.md)
-
-## Developer Guides
-
-[API documentation](api/README.md)
-
-## Other
-- [FAQ](FAQ.md)
diff --git a/docs/admin/admin-guide.md b/docs/admin/admin-guide.md
deleted file mode 100644
index 317adfd6a7..0000000000
--- a/docs/admin/admin-guide.md
+++ /dev/null
@@ -1,65 +0,0 @@
-# Administrator's guide
-
-## 1. User management
-
-Users can be managed through the `Administration` > `Users` page. Only administrators may access it. Each user is identified by their login, full name and role.
-
-![users](../files/adminguide_users.png)
-
-Please note that you still need to create user accounts if you use LDAP or Active Directory authentication. This is necessary for TheHive to retrieve their role while authenticating them against the local database, LDAP and/or AD directories.
-
-There are currently 3 roles:
- - `read`: all non-sensitive data can be read. With this role, a user can't make any change. They can't add a case, task, log or observable. They also can't run analyzers;
- - `write`: create, remove and change data of any type. This role is for standard users. The `write` role inherits `read` rights;
- - `admin`: this role is reserved for TheHive administrators.
-Users with this role can manage user accounts and metrics, and create case templates and observable data types. `admin` inherits `write` rights.
-
-**Warning**: Please note that user accounts cannot be removed once they have been created; otherwise, audit logs would refer to an unknown user. However, unwanted or unused accounts can be locked.
-
-## 2. Case template management
-
-Some cases may share the same structure (tags, tasks, description, metrics). Templates automatically add tasks, a description or metrics when creating a new case. A user can choose to create an empty case or one based on a registered template.
-
-To create a template, go to the administration menu as _admin_ and open the "Case templates" item.
-
-![template](../files/adminguide_template.png)
-
-In this screen, you can add, remove or change templates.
-A template contains:
- * a default severity
- * default tags
- * a title prefix (can be changed by the user at case creation)
- * a default TLP
- * a default description
- * a task list (title and description)
- * metrics
-
-Except for the title prefix, task list and metrics, the user can change the values defined in the template.
-
-## 3. Report template management
-
-When TheHive is connected to a Cortex server, observables can be analyzed to get additional information on them. Cortex outputs reports in JSON format. In order to make reports more readable, you can configure report templates. Report templates convert JSON into HTML using the AngularJS template engine.
-
-For each analyzer available in Cortex you can define two kinds of templates: short and long. The short report shows synthetic information at the top of the observable page; with short reports you can see a summary of all executed analyses. The long report shows detailed information, only when the user selects the report. Raw data in JSON format is always available.
-
-Report templates can be configured in the `Admin` > `Report templates` menu. We offer report templates for the default Cortex analyzers.
-A package with all report templates can be downloaded at https://dl.bintray.com/cert-bdf/thehive/report-templates.zip and can be injected using the `Import templates` button.
-
-## 4. Metrics management
-
-Metrics have been integrated to provide relevant indicators about cases.
-
-Metrics are numerical values associated with cases (for example, the number of impacted users). Each metric has a _name_, a _title_ and a _description_, defined by an administrator. Once a metric is added to a case, it can't be removed and must be filled. Metrics are used to monitor business indicators, thanks to graphs.
-
-Metrics are defined globally. To create metrics, go to the administration menu as _admin_ and open the "Case metrics" item.
-
-![metrics](../files/adminguide_metrics.png)
-
-Metrics are used to create statistics (the "Statistics" item in the user profile menu). They can be filtered by time interval and by cases with specific tags.
-
-For example, you can show the metrics of cases with the "malspam" tag in January 2016:
-
-![statistics](../files/adminguide_statistics.png)
-
-For time-based graphs, the user can choose the metrics to show. They are aggregated over an interval of time (by day, week, month or year) using a function (sum, min or max).
-
-Some metrics are predefined (in addition to those defined by the administrator), like the case handling duration (how long the case has been open) and the number of case openings and closings.
diff --git a/docs/admin/backup-restore.md b/docs/admin/backup-restore.md
deleted file mode 100644
index d361d8a4dd..0000000000
--- a/docs/admin/backup-restore.md
+++ /dev/null
@@ -1,46 +0,0 @@
-# Backup and restore data
-All persistent data are stored in the ElasticSearch database. The backup and restore procedures are the ones detailed in the [ElasticSearch documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html).
-
-_Note_: you may have to adapt your indices in the examples below.
-To find the right index, use the following command:
-
-```
-curl 'localhost:9200/_cat/indices?v'
-```
-
-## 1. Create a snapshot repository
-First you must define a location in the local filesystem (where the ElasticSearch instance runs) where the backup will be written. Be careful: if you run ElasticSearch in Docker, the directory must be mapped into the host filesystem using the `--volume` parameter (cf. [Docker documentation](https://docs.docker.com/engine/tutorials/dockervolumes/)).
-
-Create an ElasticSearch snapshot point with the following command:
-```
-$ curl -XPUT 'http://localhost:9200/_snapshot/the_hive_backup' -d '{
-  "type": "fs",
-  "settings": {
-    "location": "/absolute/path/to/backup/directory",
-    "compress": true
-  }
-}'
-```
-
-## 2. Backup your data
-Start the backup by executing the following command:
-```
-$ curl -XPUT 'http://localhost:9200/_snapshot/the_hive_backup/snapshot_1' -d '{
-  "indices": "the_hive_9"
-}'
-```
-You can back up the last index of TheHive (you can list the indices in your ElasticSearch cluster with `curl -s http://localhost:9200/_cat/indices | cut -d ' ' -f3`) or all indices with the `_all` value.
-
-## 3. Restore data
-Restore performs the reverse operation: it reads the backup in your snapshot directory and loads the indices into the ElasticSearch cluster. This is done with the following command:
-```
-$ curl -XPOST 'http://localhost:9200/_snapshot/the_hive_backup/snapshot_1/_restore' -d '{
-  "indices": "the_hive_9"
-}'
-```
diff --git a/docs/admin/configuration.md b/docs/admin/configuration.md
deleted file mode 100644
index 1e3158e3da..0000000000
--- a/docs/admin/configuration.md
+++ /dev/null
@@ -1,463 +0,0 @@
-## Configuration file
-
-The configuration file of TheHive is `/etc/thehive/application.conf` by default. This file uses the [HOCON format](https://github.com/typesafehub/config/blob/master/HOCON.md). All configuration parameters should go in this file.
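One HOCON property worth keeping in mind when editing `application.conf`: a dotted key such as `datastore.attachment.password` is equivalent to nested blocks (`datastore { attachment { password = ... } }`). A minimal sketch of that equivalence, using a plain Python dict (an illustration of the HOCON rule, not a HOCON parser):

```python
def set_dotted(config: dict, dotted_key: str, value):
    # HOCON treats a dotted path like "datastore.attachment.password"
    # as nested objects: datastore { attachment { password = ... } }
    node = config
    *parents, leaf = dotted_key.split(".")
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return config

print(set_dotted({}, "datastore.attachment.password", "malware"))
# → {'datastore': {'attachment': {'password': 'malware'}}}
```

Both spellings can be mixed freely in the same file; the last value set for a given path wins.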
-
-You can have a look at the [default settings](Default-configuration).
-
-### 1. Database
-
-TheHive uses the Elasticsearch search engine to store all persistent data. Elasticsearch is not part of the TheHive package. It must be installed and configured as a standalone instance, which can be located on the same machine. For more information on how to set up Elasticsearch, please refer to the [Elasticsearch installation guide](https://www.elastic.co/guide/en/Elasticsearch/reference/2.3/setup.html).
-
-Three settings are required to connect to Elasticsearch:
- * the base name of the index
- * the name of the cluster
- * the address(es) and port(s) of the Elasticsearch instance
-
-The default settings are:
-
-```
-# Elasticsearch
-search {
-  # Name of the index
-  index = the_hive
-  # Name of the Elasticsearch cluster
-  cluster = hive
-  # Address of the Elasticsearch instance
-  host = ["127.0.0.1:9300"]
-  # Scroll keepalive
-  keepalive = 1m
-  # Size of the page for scroll
-  pagesize = 50
-  # Number of shards
-  nbshards = 5
-  # Number of replicas
-  nbreplicas = 1
-}
-```
-
-If you use a different configuration, modify the parameters accordingly in the `application.conf` file.
-
-If multiple Elasticsearch nodes are used as a cluster, the addresses of the master nodes must be used for the `search.host` setting. All cluster nodes must use the same cluster name:
-
-```
-search {
-  host = ["node1:9300", "node2:9300"]
-  ...
-```
-
-TheHive uses the [`transport`](https://www.elastic.co/guide/en/Elasticsearch/reference/2.3/modules-transport.html#_tcp_transport) port (9300/tcp by default) and not the [`http`](https://www.elastic.co/guide/en/Elasticsearch/reference/current/modules-http.html) port (9200/tcp).
-
-TheHive versions the index schema (mapping) in Elasticsearch. Version numbers are appended to the index base name (the 8th version of the schema uses the index `the_hive_8` if `search.index = the_hive`).
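The index versioning scheme described above can be summed up as a tiny helper (illustrative only, not TheHive's actual code):

```python
def index_name(base: str, schema_version: int) -> str:
    # TheHive appends the mapping schema version to the configured base name,
    # e.g. the 8th schema version of "the_hive" lives in index "the_hive_8".
    return f"{base}_{schema_version}"

print(index_name("the_hive", 8))  # → the_hive_8
```

This is why the backup examples in `backup-restore.md` target `the_hive_9` rather than the bare base name.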
-
-When too many documents are requested from TheHive, it uses the [scroll](https://www.elastic.co/guide/en/Elasticsearch/reference/2.3/search-request-scroll.html) feature: the results are retrieved through pagination. You can specify the size of the page (`search.pagesize`) and how long pages are kept in Elasticsearch (`search.keepalive`) before purging.
-
-### 2. Datastore
-
-TheHive stores attachments as Elasticsearch documents. They are split into chunks, and each chunk sent to Elasticsearch is identified by the hash of the entire attachment and the associated chunk number.
-
-The chunk size (`datastore.chunksize`) can be changed, but any change will only affect new attachments. Existing ones won't be changed.
-
-An attachment is identified by its hash. The algorithm used is configurable (`datastore.hash.main`) but must not be changed after the first attachment insertion. Otherwise, previous files cannot be retrieved.
-
-Extra hash algorithms can be configured using `datastore.hash.extra`. These hashes are not used to identify the attachment but are shown in the user interface (the hash associated with the main algorithm is also shown). If you change the extra algorithms, you should inform TheHive and ask it to recompute all hashes. Please note that the associated API call is currently disabled in Buckfast (v 2.10). It will be reinstated in the next release.
-
-Observables can contain malicious data. When you try to download an attachment from an observable (typically a file), it is automatically zipped and the resulting ZIP file is password-protected. The default password is **malware** but it can be changed with the `datastore.attachment.password` setting.
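As a rough sketch of the chunking scheme described in this section (an illustration, not TheHive's implementation), the attachment is identified by the hash of its full content, and each chunk is keyed by that hash plus a chunk number:

```python
import hashlib

def chunk_attachment(data: bytes, chunksize: int = 50 * 1024):
    # The attachment id is the hash of the entire content (SHA-256 here,
    # matching the default datastore.hash.main); each stored chunk is
    # keyed by (attachment id, chunk number).
    attachment_id = hashlib.sha256(data).hexdigest()
    chunks = {(attachment_id, n): data[start:start + chunksize]
              for n, start in enumerate(range(0, len(data), chunksize))}
    return attachment_id, chunks

aid, chunks = chunk_attachment(b"A" * (120 * 1024))
print(len(chunks))  # 120 KB in 50 KB chunks → 3 chunks
```

This also makes it clear why changing `datastore.chunksize` only affects new attachments: old chunks keep the numbering they were written with.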
-
-Default values are:
-
-```
-# Datastore
-datastore {
-  name = data
-  # Size of stored data chunks
-  chunksize = 50k
-  hash {
-    # Main hash algorithm /!\ Don't change this value
-    main = "SHA-256"
-    # Additional hash algorithms (used in attachments)
-    extra = ["SHA-1", "MD5"]
-  }
-  attachment.password = "malware"
-}
-```
-
-### 3. Authentication
-
-TheHive supports local, LDAP and Active Directory (AD) authentication. By default, it relies on local credentials stored in Elasticsearch.
-
-Authentication methods are stored in the `auth.type` parameter, which is multi-valued. When a user logs in, each authentication method is tried in order until one succeeds. If no authentication method works, an error is returned and the user cannot log in.
-
-The default values within the configuration file are:
-```
-auth {
-  # The "type" parameter contains the authentication provider. It can be multi-valued (useful for migration).
-  # Available auth types are:
-  # services.LocalAuthSrv : passwords are stored in the user entity (in Elasticsearch). No configuration is required.
-  # ad : use Active Directory to authenticate users. Configuration is under the "auth.ad" key.
-  # ldap : use LDAP to authenticate users. Configuration is under the "auth.ldap" key.
-  type = [local]
-
-  ad {
-    # Windows domain name in DNS format. This parameter is required.
-    #domainFQDN = "mydomain.local"
-
-    # Windows domain name in short format. This parameter is required.
-    #domainName = "MYDOMAIN"
-
-    # Use SSL to connect to the domain controller
-    #useSSL = true
-  }
-
-  ldap {
-    # LDAP server name or address. The port can be specified (host:port). This parameter is required.
-    #serverName = "ldap.mydomain.local:389"
-
-    # Use SSL to connect to the directory server
-    #useSSL = true
-
-    # Account to use to bind to the LDAP server. This parameter is required.
-    #bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"
-
-    # Password of the binding account. This parameter is required.
-    #bindPW = "***secret*password***"
-
-    # Base DN to search users. This parameter is required.
-    #baseDN = "ou=users,dc=mydomain,dc=local"
-
-    # Filter to search users; {0} is replaced by the user name. This parameter is required.
-    #filter = "(cn={0})"
-  }
-}
-
-# Maximum time between two requests without requesting authentication
-session {
-  warning = 5m
-  inactivity = 1h
-}
-```
-
-To enable authentication using AD or LDAP, edit the `application.conf` file and supply the values for your environment. Then you need to create an account in TheHive for each AD or LDAP user in the `Administration > Users` page (which can only be accessed by an administrator). This is required as TheHive needs to look up the role associated with the user, and that role is stored locally by TheHive. Obviously, you don't need to supply a password as TheHive will check the credentials against the remote directory.
-
-In order to use SSL with LDAP or AD, TheHive must be able to validate remote certificates. To that end, the Java truststore must contain the certificate authorities used to generate the AD and/or LDAP certificates. The default JVM truststore contains the main official authorities, but LDAP and AD certificates are probably not issued by them.
-
-Use [keytool](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html) to create the truststore:
-```
-keytool -import -file /path/to/your/ca.cert -alias InternalCA -keystore /path/to/your/truststore.jks
-```
-
-Then add the `-Djavax.net.ssl.trustStore=/path/to/your/truststore.jks` parameter when you start TheHive, or put it in the `JAVA_OPTS` environment variable before starting TheHive.
-
-### 4. Streaming (a.k.a. The Flow)
-The user interface is automatically updated when data is changed in the back-end.
-To do this, the back-end sends events to all connected front-ends. The mechanism used to notify the front-ends is called long polling and its settings are:
-
-* `refresh`: when there is no notification, close the connection after this duration (the default is 1 minute).
-* `cache`: before polling, a session must be created, in order to make sure no event is lost between two polls. If there is no poll during the `cache` period, the session is destroyed (the default is 15 minutes).
-* `nextItemMaxWait`, `globalMaxWait`: when an event occurs, it is not immediately sent to the front-ends. The back-end waits `nextItemMaxWait`, and up to `globalMaxWait`, in case other events can be included in the notification. This mechanism saves many HTTP requests.
-
-Default values are:
-```
-# Streaming
-stream.longpolling {
-  # Maximum time a stream request waits for new elements
-  refresh = 1m
-  # Lifetime of the stream session without requests
-  cache = 15m
-  nextItemMaxWait = 500ms
-  globalMaxWait = 1s
-}
-```
-
-### 5. Entity size limit
-The Play framework used by TheHive sets the HTTP body size limit to 100KB by default for textual content (JSON, XML, text, form data) and 10MB for file uploads. This may be too small in many cases, so you may want to change it with the following settings in the `application.conf` file:
-
-```
-# Max textual content length
-play.http.parser.maxMemoryBuffer=1M
-# Max file size
-play.http.parser.maxDiskBuffer=1G
-```
-
-*Note*: if you are using an NGINX reverse proxy in front of TheHive, be aware that it doesn't distinguish between text data and a file upload. So, you should also set the `client_max_body_size` parameter in your NGINX server configuration to the higher of the two values: the file upload size and the text size defined in TheHive's `application.conf` file.
-
-### 6. Cortex
-TheHive can use one or several [Cortex](https://github.com/CERT-BDF/Cortex) analysis engines to get additional information on observables.
-When configured, the analyzers available in Cortex become usable in TheHive. First you must enable the `CortexConnector`, choose an identifier, then specify the URL for each Cortex server:
-```
-## Enable the Cortex module
-play.modules.enabled += connectors.cortex.CortexConnector
-
-cortex {
-  "CORTEX-SERVER-ID" {
-    # URL of the Cortex server
-    url = ""
-  }
-  # HTTP client configuration, more details in section 8
-  # ws {
-  #   proxy {}
-  #   ssl {}
-  # }
-}
-```
-
-Cortex analyzes observables and outputs reports in JSON format. TheHive shows the report as-is by default. In order to make reports more readable, we provide report templates, which are in a separate package and must be installed manually:
- - download the report template package from https://dl.bintray.com/cert-bdf/thehive/report-templates.zip
- - log in to TheHive using an administrator account
- - go to the `Admin` > `Report templates` menu
- - click on the `Import templates` button and select the downloaded package
-
-The HTTP client used by the Cortex connector uses the global configuration (in `play.ws`), but it can be overridden in the Cortex section and in each Cortex server configuration. Refer to section 8 for more details on how to configure the HTTP client.
-
-### 7. MISP
-TheHive has the ability to connect to one or several MISP servers. Within the configuration file, you can register your MISP server(s) under the `misp` configuration keyword. Each server shall be identified using an arbitrary name, its `url`, the corresponding authentication `key` and optional `tags` to add to the corresponding cases when importing MISP events.
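As a sketch of what "one or several MISP servers" looks like in practice, two server blocks can sit side by side under the `misp` key (the identifiers, URLs and tags below are placeholders, not values from the documentation):

```
misp {
  "MISP-INTERNAL" {
    url = "https://misp.internal.example"
    key = ""
    tags = ["misp", "internal"]
  }
  "MISP-PARTNER" {
    url = "https://misp.partner.example"
    key = ""
    tags = ["misp", "partner"]
  }
  interval = 1h
}
```

Cases created from each server carry that server's `tags`, which makes it easy to tell the imports apart.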
-
-#### 7.1 Minimal Configuration
-To sync with a MISP server and retrieve events, edit the `application.conf` file and adjust the example shown below to your setup:
-
-```
-## Enable the MISP module
-play.modules.enabled += connectors.misp.MispConnector
-
-misp {
-  "MISP-SERVER-ID" {
-    # URL of the MISP server
-    url = ""
-
-    # authentication key
-    key = ""
-
-    # tags that must be automatically added to the case corresponding to the imported event
-    tags = ["misp"]
-
-    # truststore configuration using the "cert" key is deprecated
-    #cert = /path/to/truststore.jks
-
-    # HTTP client configuration, more details in section 8
-    # ws {
-    #   proxy {}
-    #   ssl {}
-    # }
-  }
-  # Interval between two MISP event imports in hours (h) or minutes (m)
-  interval = 1h
-}
-```
-The HTTP client used by the MISP connector uses the global configuration (in `play.ws`), but it can be overridden in the MISP section and in each MISP server configuration (in `misp.MISP-SERVER-ID.ws`). Refer to section 8 for more details on how to configure the HTTP client.
-
-Before TheHive 2.11, the truststore could be set using the `cert` key. This setting is now deprecated and its support will be removed in the next major version (2.12). It can easily be replaced:
- - before:
-```
-misp {
-  [...]
-  cert = "/path/to/truststore.jks"
-}
-```
- - after:
-```
-misp {
-  [...]
-  ws.ssl.trustManager.stores = [
-    {
-      type: "JKS"
-      path: "/path/to/truststore.jks"
-    }
-  ]
-}
-```
-
-#### 7.2 Associate a Case Template to MISP Imports
-As stated in the subsection above, TheHive is able to automatically import MISP events and create cases out of them. This operation leverages the template engine. Thus you'll need to create a case template prior to importing MISP events.
-
-First, create a case template. Let's call it **MISP_CASETEMPLATE**.
- -![](../files/MISP_caseTemplate.png) - -Then update TheHive's configuration to add a `caseTemplate` parameter as shown in the example below: - -``` -misp { - "MISP-SERVER-ID" { - # URL of the MISP server - url = "" - # authentication key - key = "" - # tags that must be automatically added to the case corresponding to the imported event - tags = ["misp"] - # case template - caseTemplate = "MISP_CASETEMPLATE" - } -} -``` - -Once the configuration file has been edited, restart TheHive. Every newly imported MISP event will generate a case according to the "MISP_CASETEMPLATE" template. - -### 8. HTTP client configuration - -The HTTP client can be configured by adding a `ws` key in any section that needs to connect to a remote HTTP service. The key can contain configuration items defined in [play WS configuration](https://www.playframework.com/documentation/2.5.x/ScalaWS#Configuring-WS): - - - `ws.followRedirects`: Configures the client to follow 301 and 302 redirects (default is true). - - `ws.useragent`: To configure the User-Agent header field. - - `ws.compressionEnabled`: Set it to true to use gzip/deflater encoding (default is false). - -#### Timeouts -There are 3 different timeouts in WS. Reaching a timeout causes the WS request to be interrupted. - - `ws.timeout.connection`: The maximum time to wait when connecting to the remote host (default is 120 seconds). - - `ws.timeout.idle`: The maximum time the request can stay idle (connection is established but waiting for more data) (default is 120 seconds). - - `ws.timeout.request`: The total time you accept a request to take (it will be interrupted even if the remote host is still sending data) (default is 120 seconds). - -#### Proxy -A proxy can be used. By default, the proxy configured in the JVM is used, but a specific configuration can be defined for each HTTP client. - - `ws.useProxyProperties`: To use the JVM system’s HTTP proxy settings (http.proxyHost, http.proxyPort) (default is true).
This setting is ignored if the `ws.proxy` setting is present. - - `ws.proxy.host`: The hostname of the proxy server. - - `ws.proxy.port`: The port of the proxy server. - - `ws.proxy.protocol`: The protocol of the proxy server. Use "http" or "https". Defaults to "http" if not specified. - - `ws.proxy.user`: The username of the credentials for the proxy server. - - `ws.proxy.password`: The password for the credentials for the proxy server. - - `ws.proxy.ntlmDomain`: The NTLM domain to use for proxy authentication. - - `ws.proxy.encoding`: The realm's charset. - - `ws.proxy.proxyNonProxyHosts`: The list of hosts for which the proxy must not be used. - -#### SSL -The SSL settings of the HTTP client can be fully configured in the `application.conf` file. - -##### Certificate manager -The certificate manager stores client certificates and certificate authorities. - -The `keyManager` setting indicates which certificate the HTTP client can use to authenticate itself to the remote host (when certificate-based authentication is used): -``` - ws.ssl.keyManager { - stores = [ - { - type: "pkcs12" // PKCS12, JKS or PEM - path: "mycert.p12" - password: "password1" - } - ] - } -``` -Certificate authorities are configured using the `trustManager` key. It is used to establish a secure connection with the remote host, whose certificate must be signed by a trusted certificate authority.
-``` - ws.ssl.trustManager { - stores = [ - { - type: "JKS" // JKS or PEM - path: "keystore.jks" - password: "password1" - } - ] - } -``` -##### Debugging -To debug the key manager / trust manager, set the following flags: -``` - ws.ssl.debug = { - ssl = true - trustmanager = true - keymanager = true - sslctx = true - handshake = true - verbose = true - data = true - certpath = true - } -``` - -##### Protocols -If you want to define a different default protocol, you can set it specifically in the client: -``` -ws.ssl.protocol = "TLSv1.2" -``` -If you want to define the list of enabled protocols, you can do so explicitly: -``` -ws.ssl.enabledProtocols = ["TLSv1.2", "TLSv1.1", "TLSv1"] -``` -##### Ciphers -Cipher suites can be configured using `ws.ssl.enabledCipherSuites`: -``` -ws.ssl.enabledCipherSuites = [ - "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", - "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384", - "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" -] -``` - -### 9. Monitoring and Performance Metrics - -Performance metrics (response time, rate of calls to Elasticsearch and of HTTP requests, throughput, memory usage...) can be collected if enabled in the configuration. - -Enable them by editing the `application.conf` file and adding: - -``` -# Register module for dependency injection -play.modules.enabled += connectors.metrics.MetricsModule - -metrics.enabled = true -``` - -These metrics can optionally be sent to an external database (Graphite, Ganglia or InfluxDB) in order to monitor the health of the platform. This feature is disabled by default.
- -``` -metrics { - name = default - enabled = true - rateUnit = SECONDS - durationUnit = SECONDS - showSamples = false - jvm = true - logback = true - - graphite { - enabled = false - host = "127.0.0.1" - port = 2003 - prefix = thehive - rateUnit = SECONDS - durationUnit = MILLISECONDS - period = 10s - } - - ganglia { - enabled = false - host = "127.0.0.1" - port = 8649 - mode = UNICAST - ttl = 1 - version = 3.1 - prefix = thehive - rateUnit = SECONDS - durationUnit = MILLISECONDS - tmax = 60 - dmax = 0 - period = 10s - } - - influx { - enabled = false - url = "http://127.0.0.1:8086" - user = root - password = root - database = thehive - retention = default - consistency = ALL - #tags = { - # tag1 = value1 - # tag2 = value2 - #} - period = 10s - } -} -``` -### 10. HTTPS -To enable HTTPS in the application, add the following lines to `/etc/thehive/application.conf`: -``` - https.port: 9443 - play.server.https.keyStore { - path: "/path/to/keystore.jks" - type: "JKS" - password: "password_of_keystore" - } -``` -Once HTTPS is enabled, HTTP can be disabled by adding `http.port=disabled` to the configuration. - -To import your certificate into the keystore, depending on your situation, you can follow [Digital Ocean's tutorial](https://www.digitalocean.com/community/tutorials/java-keytool-essentials-working-with-java-keystores). - -**More information**: -This is a setting of the Play framework that is documented on its website. Please refer to [https://www.playframework.com/documentation/2.5.x/ConfiguringHttps](https://www.playframework.com/documentation/2.5.x/ConfiguringHttps).
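On the client side, the `ws.ssl` protocol and trust settings described in section 8 map onto a standard TLS context. A hedged Python sketch of that mapping (illustrative only — Python's `ssl` module uses PEM CA bundles rather than JKS keystores, and the function name is made up for this example):

```python
import ssl

def make_client_context(min_version=ssl.TLSVersion.TLSv1_2, ca_file=None):
    """Build an SSLContext roughly equivalent to the ws.ssl settings:
    min_version mirrors ws.ssl.protocol, ca_file plays the role of the
    trustManager store (a PEM bundle here instead of a JKS keystore)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies hostname and cert
    ctx.minimum_version = min_version
    if ca_file:
        # Counterpart of ws.ssl.trustManager.stores
        ctx.load_verify_locations(cafile=ca_file)
    else:
        ctx.load_default_certs()
    return ctx

ctx = make_client_context()
print(ctx.minimum_version)  # TLSVersion.TLSv1_2
```

Such a context can then be passed to any HTTP client that accepts an `ssl.SSLContext`, to talk to the HTTPS endpoint configured above.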
diff --git a/docs/admin/default-configuration.md b/docs/admin/default-configuration.md deleted file mode 100644 index b19263677f..0000000000 --- a/docs/admin/default-configuration.md +++ /dev/null @@ -1,184 +0,0 @@ -You can find below the default configuration settings of TheHive: - -``` -# Register module for dependency injection -play.modules.enabled += global.TheHive - -# handler for requests (check if database is in maintenance or not) -#play.http.requestHandler = TheHiveHostRequestHandler - -play.http.filters = global.TheHiveFilters - -# ElasticSearch -search { - # Name of the index - index = the_hive - # Name of the ElasticSearch cluster - cluster = hive - # Address of the ElasticSearch instance - host = ["127.0.0.1:9300"] - # Scroll keepalive - keepalive = 1m - # Size of the page for scroll - pagesize = 50 -} - -# Datastore -datastore { - name = data - # Size of stored data chunks - chunksize = 50k - hash { - # Main hash algorithm /!\ Don't change this value - main = "SHA-256" - # Additional hash algorithms (used in attachments) - extra = ["SHA-1", "MD5"] - } - attachment.password = "malware" -} - -auth { - # "type" parameter contains authentication provider. It can be multi-valued (useful for migration) - # available auth types are: - # services.LocalAuthSrv : passwords are stored in user entity (in ElasticSearch). No configuration is required. - # ad : use ActiveDirectory to authenticate users. Configuration is under "auth.ad" key - # ldap : use LDAP to authenticate users. Configuration is under "auth.ldap" key - type = [local] - - ad { - # Windows domain name in DNS format. This parameter is required. - #domainFQDN = "mydomain.local" - - # Windows domain name in short format. This parameter is required. - #domainName = "MYDOMAIN" - - # Use SSL to connect to domain controller - #useSSL = true - } - - ldap { - # LDAP server name or address. Port can be specified (host:port). This parameter is required.
- #serverName = "ldap.mydomain.local:389" - - # Use SSL to connect to directory server - #useSSL = true - - # Account to use to bind on LDAP server. This parameter is required. - #bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local" - - # Password of the binding account. This parameter is required. - #bindPW = "***secret*password***" - - # Base DN to search users. This parameter is required. - #baseDN = "ou=users,dc=mydomain,dc=local" - - # Filter to search users. {0} is replaced by the user name. This parameter is required. - #filter = "(cn={0})" - } -} - -# Maximum time between two requests without requesting authentication -session { - warning = 5m - inactivity = 1h -} - -# Streaming -stream.longpolling { - # Maximum time a stream request waits for new element - refresh = 1m - # Lifetime of the stream session without request - cache = 15m - nextItemMaxWait = 500ms - globalMaxWait = 1s -} - -# Name of the ElasticSearch type used to store dblist /!\ Don't change this value -dblist.name = dblist -# Name of the ElasticSearch type used to store audit event /!\ Don't change this value -audit.name = audit -# Name of the ElasticSearch type used to store attachment /!\ Don't change this value -datastore.name = data - -# Cortex configuration -######## - -cortex { - #"CORTEX-SERVER-ID" { - # # URL of Cortex server - # url = "" - #} -} - -# MISP configuration -######## - -misp { - #"MISP-SERVER-ID" { - # # URL of MISP server - # url = "" - # # authentication key - # key = "" - # # tags to be added to imported artifacts - # tags = ["misp"] - #} - - # truststore used to validate the MISP certificate (if the default truststore is not sufficient) - #cert = /path/to/truststore.jks - - # Interval between two MISP event imports - interval = 1h -} - -# Metrics configuration -######## - -metrics { - name = default - enabled = false - rateUnit = SECONDS - durationUnit = SECONDS - jvm = true - logback = true - - graphite { - enabled = false - host = "127.0.0.1" - port = 2003 - prefix = thehive -
rateUnit = SECONDS - durationUnit = MILLISECONDS - period = 10s - } - - ganglia { - enabled = false - host = "127.0.0.1" - port = 8649 - mode = UNICAST - ttl = 1 - version = 3.1 - prefix = thehive - rateUnit = SECONDS - durationUnit = MILLISECONDS - tmax = 60 - dmax = 0 - period = 10s - } - - influx { - enabled = false - url = "http://127.0.0.1:8086" - user = root - password = root - database = thehive - retention = default - consistency = ALL - #tags = { - # tag1 = value1 - # tag2 = value2 - #} - period = 10s - } -} -``` diff --git a/docs/admin/updating.md b/docs/admin/updating.md deleted file mode 100644 index b985ef2728..0000000000 --- a/docs/admin/updating.md +++ /dev/null @@ -1,6 +0,0 @@ -# Update TheHive -TheHive is simple to update. You only need to replace your current package files with new ones. If the schema of the data changes between the two versions, the first request to the application asks the user to start a data migration. In this case, authentication is not required. - -![update](../files/adminguide_update.png) - -This process creates a new index in ElasticSearch (suffixed by the version of the schema) and copies all the data into it (adapting its format in the process). It is always possible to roll back to the previous version, but all modifications made on the new version will be lost. diff --git a/docs/api/README.md b/docs/api/README.md deleted file mode 100644 index a0ae4b657b..0000000000 --- a/docs/api/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# TheHive API - -TheHive exposes REST APIs through JSON over HTTP.
- -- [HTTP request format](request.md) -- [Authentication](authentication.md) -- [Model](model.md) -- [case](case.md) -- [observable](artifact.md) -- [task](task.md) -- [log](log.md) \ No newline at end of file diff --git a/docs/api/artifact.md b/docs/api/artifact.md deleted file mode 100644 index 9fcbe77d5e..0000000000 --- a/docs/api/artifact.md +++ /dev/null @@ -1,90 +0,0 @@ -# Observable - -## Model definition - -Required attributes: - - - `data` (string) : content of the observable (read only). An observable can't contain both `data` and `attachment` attributes - - `attachment` (attachment) : observable file content (read-only). An observable can't contain both `data` and `attachment` - attributes - - `dataType` (enumeration) : type of the observable (read only) - - `message` (text) : description of the observable in the context of the case - - `startDate` (date) : date of the observable creation **default=now** - - `tlp` (number) : [TLP](https://www.us-cert.gov/tlp) (`-1`: `unknown`; `0`: `white`; `1`: `green`; `2`: `amber`; - `3`: `red`) **default=2** - - `ioc` (boolean) : indicates if the observable is an IOC **default=false** - - `status` (artifactStatus) : status of the observable (*Ok* or *Deleted*) **default=Ok** - -Optional attributes: - - `tags` (multi-string) : observable tags - -## Observable manipulation - -### Observable methods - -|HTTP Method |URI |Action | -|-----------|----------------------------------------|--------------------------------------| -|POST |/api/case/artifact/_search |Find observables | -|POST |/api/case/artifact/_stats |Compute stats on observables | -|POST |/api/case/:caseId/artifact |Create an observable | -|GET |/api/case/artifact/:artifactId |Get an observable | -|DELETE |/api/case/artifact/:artifactId |Remove an observable | -|PATCH |/api/case/artifact/:artifactId |Update an observable | -|GET |/api/case/artifact/:artifactId/similar |Get list of similar observables | -|PATCH |/api/case/artifact/_bulk |Update observables in bulk | - -### List
Observables of a Case -The complete observable list of a case can be retrieved by performing a search: -``` -POST /api/case/artifact/_search -``` -Parameters: - - `query`: `{ "_parent": { "_type": "case", "_query": { "_id": "<>" } } }` - - `range`: `all` - -\<\\> must be replaced by the case id (not the case number!) - -## Task manipulation - -### Create a task -The URL used to create a task is: -``` -POST /api/case/<>/task -``` -\<\\> must be replaced by the case id (not the case number!) - -Required task attributes (cf. models) must be provided. - -This call returns the attributes of the created task. - -#### Examples -Creation of a simple task in case `AVqqdpY2yQ6w1DNC8aDh`: -``` -curl -XPOST -u myuser:mypassword -H 'Content-Type: application/json' http://127.0.0.1:9000/api/case/AVqqdpY2yQ6w1DNC8aDh/task -d '{ - "title": "Do something" -}' -``` -It returns: -``` -{ - "createdAt": 1488918771513, - "status": "Waiting", - "createdBy": "myuser", - "title": "Do something", - "order": 0, - "user": "myuser", - "flag": false, - "id":"AVqqeXc9yQ6w1DNC8aDj", - "type":"case_task" -} -``` - -Creation of another task: -``` -curl -XPOST -u myuser:mypassword -H 'Content-Type: application/json' http://127.0.0.1:9000/api/case/AVqqdpY2yQ6w1DNC8aDh/task -d '{ - "title": "Analyze the malware", - "description": "The malware XXX is analyzed using sandbox ...", - "owner": "Joe", - "status": "InProgress" -}' -``` diff --git a/docs/api/authentication.md b/docs/api/authentication.md deleted file mode 100644 index 0d16e5a5c8..0000000000 --- a/docs/api/authentication.md +++ /dev/null @@ -1,12 +0,0 @@ -# Authentication - -Most API calls require authentication. Credentials can be provided using a session cookie or directly using HTTP basic -authentication. (API key is not usable in the current version of TheHive, due to a rethinking of service account -**TODO need issue reference**). - -A session cookie is suitable for browser authentication, not for a dedicated tool.
The easiest solution if you want to -write a tool that uses TheHive API is to use basic authentication. For example, to list cases, use the following curl -command: -``` -curl -u mylogin:mypassword http://127.0.0.1:9000/api/cases -``` diff --git a/docs/api/case.md b/docs/api/case.md deleted file mode 100644 index f95b4d8106..0000000000 --- a/docs/api/case.md +++ /dev/null @@ -1,94 +0,0 @@ -# Case - -## Model definition - -Required attributes: - - `title` (text) : title of the case - - `description` (text) : description of the case - - `severity` (number) : severity of the case (0: not set; 1: low; 2: medium; 3: high) **default=2** - - `startDate` (date) : date and time of the beginning of the case **default=now** - - `owner` (string) : user to whom the case has been assigned **default=user who creates the case** - - `flag` (boolean) : flag of the case **default=false** - - `tlp` (number) : [TLP](https://www.us-cert.gov/tlp) (`-1`: `unknown`; `0`: `white`; `1`: `green`; `2`: `amber`; - `3`: `red`) **default=-1** - - `status` (caseStatus) : status of the case (*Open*, *Resolved* or *Deleted*) **default=Open** - -Optional attributes: - - `tags` (multi-string) : case tags - - `resolutionStatus` (caseResolutionStatus) : resolution status of the case (*Indeterminate*, *FalsePositive*, - *TruePositive*, *Other* or *Duplicated*) - - `impactStatus` (caseImpactStatus) : impact status of the case (*NoImpact*, *WithImpact* or *NotApplicable*) - - `summary` (text) : summary of the case, to be provided when closing a case - - `endDate` (date) : resolution date - - `metrics` (metrics) : list of metrics - -Attributes generated by the backend: - - `caseId` (number) : Id of the case (auto-generated) - - `mergeInto` (string) : ID of the case created by the merge - - `mergeFrom` (multi-string) : IDs of the cases that were merged - -## Case Manipulation - -### Case methods - -|HTTP Method |URI |Action | -|-----------|----------------------------------------|--------------------------------------|
-|GET |/api/case |List cases | -|POST |/api/case/_search |Find cases | -|PATCH |/api/case/_bulk |Update cases in bulk | -|POST |/api/case/_stats |Compute stats on cases | -|POST |/api/case |Create a case | -|GET |/api/case/:caseId |Get a case | -|PATCH |/api/case/:caseId |Update a case | -|DELETE |/api/case/:caseId |Remove a case | -|GET |/api/case/:caseId/links |Get list of cases linked to this case | -|POST |/api/case/:caseId1/_merge/:caseId2 |Merge two cases | - - -### Create a Case - -A case can be created using the following URL: -``` -POST /api/case -``` -Required case attributes (cf. models) must be provided. - -This call returns the attributes of the created case. - -#### Examples -Creation of a simple case: -``` -curl -XPOST -u myuser:mypassword -H 'Content-Type: application/json' http://127.0.0.1:9000/api/case -d '{ - "title": "My first case", - "description": "This case has been created by my custom script" -}' -``` -It returns: -``` -{ - "severity": 3, - "createdBy": "myuser", - "createdAt": 1488918582777, - "caseId": 1, - "title": "My first case", - "startDate": 1488918582836, - "owner": "myuser", - "status": "Open", - "description": "This case has been created by my custom script", - "user": "myuser", - "tlp": 2, - "flag": false, - "id": "AVqqdpY2yQ6w1DNC8aDh", - "type":"case" -} -``` -Creation of another case: -``` -curl -XPOST -u myuser:mypassword -H 'Content-Type: application/json' http://127.0.0.1:9000/api/case -d '{ - "title": "My second case", - "description": "This case has been created by my custom script, its severity is high, its TLP is red and it contains tags", - "severity": 3, - "tlp": 3, - "tags": ["automatic", "creation"] -}' -``` diff --git a/docs/api/log.md b/docs/api/log.md deleted file mode 100644 index 7b7e8bc030..0000000000 --- a/docs/api/log.md +++ /dev/null @@ -1,85 +0,0 @@ -# Log - -## Model definition - -Required attributes: - - `message` (text) : content of the log - - `startDate` (date) : date of the log submission
**default=now** - - `status` (logStatus) : status of the log (*Ok* or *Deleted*) **default=Ok** - -Optional attributes: - - `attachment` (attachment) : file attached to the log - -## Log manipulation - -### Log methods - -|HTTP Method |URI |Action | -|-----------|----------------------------------------|--------------------------------------| -|GET |/api/case/task/:taskId/log |Get logs of the task | -|POST |/api/case/task/log/_search |Find logs | -|POST |/api/case/task/:taskId/log |Create a log | -|PATCH |/api/case/task/log/:logId |Update a log | -|DELETE |/api/case/task/log/:logId |Remove a log | -|GET |/api/case/task/log/:logId |Get a log | - -### Create a log -The URL used to create a log is: -``` -POST /api/case/task/<>/log -``` -\<\\> must be replaced by the task id - -Required log attributes (cf. models) must be provided. - -This call returns the attributes of the created log. - -#### Examples -Creation of a simple log in task `AVqqeXc9yQ6w1DNC8aDj`: -``` -curl -XPOST -u myuser:mypassword -H 'Content-Type: application/json' http://127.0.0.1:9000/api/case/task/AVqqeXc9yQ6w1DNC8aDj/log -d '{ - "message": "Some message" -}' -``` -It returns: -``` -{ - "startDate": 1488919949497, - "createdBy": "admin", - "createdAt": 1488919949495, - "user": "myuser", - "message":"Some message", - "status": "Ok", - "id": "AVqqi3C-yQ6w1DNC8aDq", - "type":"case_task_log" -} -``` - -If the log contains an attachment, the request must be sent in multipart format: -``` -curl -XPOST -u myuser:mypassword http://127.0.0.1:9000/api/case/task/AVqqeXc9yQ6w1DNC8aDj/log -F '_json={"message": "Screenshot of fake site"};type=application/json' -F 'attachment=@screenshot1.png;type=image/png' -``` -It returns: -``` -{ - "createdBy": "myuser", - "message": "Screenshot of fake site", - "createdAt": 1488920587391, - "startDate": 1488920587394, - "user": "myuser", - "status": "Ok", - "attachment": { - "name": "screenshot1.png", - "hashes": [ - "086541e99743c6752f5fd4931e256e6e8d5fc7afe47488fb9e0530c390d0ca65", -
"8b81e038ae0809488f20b5ec7dc91e488ef601e2", - "c5883708f42a00c3ab1fba5bbb65786c" - ], - "size": 15296, - "contentType": "image/png", - "id": "086541e99743c6752f5fd4931e256e6e8d5fc7afe47488fb9e0530c390d0ca65" - }, - "id": "AVqqlSy0yQ6w1DNC8aDx", - "type": "case_task_log" -} -``` diff --git a/docs/api/model.md b/docs/api/model.md deleted file mode 100644 index 53b861153c..0000000000 --- a/docs/api/model.md +++ /dev/null @@ -1,26 +0,0 @@ -# TheHive Model Definition - -## Field Types - - - `string` : textual data (example "malware"). - - `text` : textual data. The difference between `string` and `text` lies in the way content can be searched: a `string` is - searchable as-is, whereas for `text`, individual words (tokens) are searchable, not the whole content (example "Ten users have received - this ransomware"). - - `date` : date and time, using a timestamp in milliseconds. - - `boolean` : true or false - - `number` : numeric value - - `metrics` : JSON object that contains only numbers - -A field can be prefixed with `multi-` in order to indicate that multiple values can be provided. - -## Common Attributes - -All entities share the following attributes: - - `createdBy` (text) : login of the user who created the entity - - `createdAt` (date) : date and time of the creation - - `updatedBy` (text) : login of the user who last updated the entity - - `updatedAt` (date) : date and time of the last update - - `user` (text) : same value as `createdBy` (this field is deprecated) -These attributes are handled by the back-end and can't be updated directly. - - diff --git a/docs/api/request.md b/docs/api/request.md deleted file mode 100644 index bff40a6f06..0000000000 --- a/docs/api/request.md +++ /dev/null @@ -1,41 +0,0 @@ -## Request formats - -TheHive accepts several parameter formats within an HTTP request. They can be used interchangeably. Input data can be: -- a query string -- URL-encoded form -- multi-part -- JSON - -Hence, the requests below are equivalent.
- -### Query String -``` -curl -XPOST 'http://127.0.0.1:9000/api/login?user=me&password=secret' -``` - -### URL-encoded Form -``` -curl -XPOST 'http://127.0.0.1:9000/api/login' -d user=me -d password=secret -``` - -### JSON -``` -curl -XPOST http://127.0.0.1:9000/api/login -H 'Content-Type: application/json' -d '{ - "user": "me", - "password": "secret" -}' -``` - -### Multi-part -``` -curl -XPOST http://127.0.0.1:9000/api/login -F '_json=<-;type=application/json' << _EOF_ -{ - "user": "me", - "password": "secret" -} -_EOF_ -``` - -## Response Format - -TheHive outputs JSON data. diff --git a/docs/api/task.md b/docs/api/task.md deleted file mode 100644 index 0fc0fea096..0000000000 --- a/docs/api/task.md +++ /dev/null @@ -1,16 +0,0 @@ -# Task - -## Model definition - -Required attributes: - - `title` (text) : title of the task - - `status` (taskStatus) : status of the task (*Waiting*, *InProgress*, *Completed* or *Cancel*) **default=Waiting** - - `flag` (boolean) : flag of the task **default=false** - -Optional attributes: - - `owner` (string) : user who owns the task. This is automatically set to current user when status is set to - *InProgress* - - `description` (text) : task details - - `startDate` (date) : date of the beginning of the task. This is automatically set when status is set to *Open* - - `endDate` (date) : date of the end of the task. 
This is automatically set when status is set to *Completed* - diff --git a/docs/files/Architecture.png b/docs/files/Architecture.png deleted file mode 100644 index c5ccb3e42c..0000000000 Binary files a/docs/files/Architecture.png and /dev/null differ diff --git a/docs/files/MISP_caseTemplate.png b/docs/files/MISP_caseTemplate.png deleted file mode 100644 index 23b71d822e..0000000000 Binary files a/docs/files/MISP_caseTemplate.png and /dev/null differ diff --git a/docs/files/Workflow.png b/docs/files/Workflow.png deleted file mode 100644 index dd223f33fa..0000000000 Binary files a/docs/files/Workflow.png and /dev/null differ diff --git a/docs/files/adminguide_metrics.png b/docs/files/adminguide_metrics.png deleted file mode 100644 index c67407ad77..0000000000 Binary files a/docs/files/adminguide_metrics.png and /dev/null differ diff --git a/docs/files/adminguide_statistics.png b/docs/files/adminguide_statistics.png deleted file mode 100644 index 2d8240e0bc..0000000000 Binary files a/docs/files/adminguide_statistics.png and /dev/null differ diff --git a/docs/files/adminguide_template.png b/docs/files/adminguide_template.png deleted file mode 100644 index 8ea377123d..0000000000 Binary files a/docs/files/adminguide_template.png and /dev/null differ diff --git a/docs/files/adminguide_update.png b/docs/files/adminguide_update.png deleted file mode 100644 index 7f042167bd..0000000000 Binary files a/docs/files/adminguide_update.png and /dev/null differ diff --git a/docs/files/adminguide_users.png b/docs/files/adminguide_users.png deleted file mode 100644 index f835631461..0000000000 Binary files a/docs/files/adminguide_users.png and /dev/null differ diff --git a/docs/files/installguide_create_admin.png b/docs/files/installguide_create_admin.png deleted file mode 100644 index b5a4d0a2ac..0000000000 Binary files a/docs/files/installguide_create_admin.png and /dev/null differ diff --git a/docs/files/installguide_login.png b/docs/files/installguide_login.png deleted file mode 
100644 index 74224d5c59..0000000000 Binary files a/docs/files/installguide_login.png and /dev/null differ diff --git a/docs/files/installguide_update_database.png b/docs/files/installguide_update_database.png deleted file mode 100644 index 7f7e4b9094..0000000000 Binary files a/docs/files/installguide_update_database.png and /dev/null differ diff --git a/docs/installation/README.md b/docs/installation/README.md deleted file mode 100644 index b3f9eae00e..0000000000 --- a/docs/installation/README.md +++ /dev/null @@ -1,9 +0,0 @@ -# Installation guides - -TheHive can be installed using: -- [rpm package](rpm-guide.md) -- [deb package](deb-guide.md) -- [docker](docker-guide.md) -- [binary](binary-guide.md) -- [ansible script](https://github.com/drewstinnett/ansible-thehive) contributed by [@drewstinnett](https://github.com/drewstinnett) -- [build from sources](build-guide.md) \ No newline at end of file diff --git a/docs/installation/binary-guide.md b/docs/installation/binary-guide.md deleted file mode 100644 index a6d01bf86e..0000000000 --- a/docs/installation/binary-guide.md +++ /dev/null @@ -1,157 +0,0 @@ -# Installation Guide for Ubuntu 16.04 LTS - -This guide describes the manual installation of TheHive from binaries in Ubuntu 16.04. - -# 1. Minimal Ubuntu Installation - -Install a minimal Ubuntu 16.04 system with the following software: - * Java runtime environment 1.8+ (JRE) - * ElasticSearch 2.x - -Make sure your system is up-to-date: - -``` -sudo apt-get update -sudo apt-get upgrade -``` - -# 2. Install a Java Virtual Machine -You can install either Oracle Java or OpenJDK. - -## 2.1. 
Oracle Java -``` -echo 'deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' | sudo tee -a /etc/apt/sources.list.d/java.list -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key EEA14886 -sudo apt-get update -sudo apt-get install oracle-java8-installer -``` - -Once Oracle Java is installed, go directly to section -[3. Install and Prepare your Database](#3-install-and-prepare-your-database). - -## 2.2 OpenJDK -``` -sudo add-apt-repository ppa:openjdk-r/ppa -sudo apt-get update -sudo apt-get install openjdk-8-jre-headless - -``` - -# 3. Install ElasticSearch - -The installation of ElasticSearch is described in the following [guide](elasticsearch-guide.md). - -# 4. Install TheHive - -The binary package can be downloaded from [thehive-latest.zip](https://dl.bintray.com/cert-bdf/thehive/thehive-latest.zip) - -After configuring TheHive, if you use Cortex, don't forget to install -[report templates](../admin-guid.md#3-report-template-management). - -## 4.1. Install from Binaries - -Download and unzip the chosen binary package. TheHive files can be installed wherever you want on the filesystem. In -this guide, we decided to install it in `/opt`. - -``` -cd /opt -wget https://dl.bintray.com/cert-bdf/thehive/thehive-latest.zip -unzip thehive-latest.zip -ln -s thehive-x.x.x thehive -``` - -### 4.2. Configuration - -#### 4.2.1 Required configuration - -Please refer to the [configuration guide](../admin/configuration.md) for full information on TheHive configuration. -The only required parameter in order to start TheHive is the key of the server (`play.crypto.secret`). This key is used -to authenticate cookies that contain data. If TheHive runs in cluster mode, all instances must share the same key.
-You can generate the minimal configuration with the following command lines (they assume that you have created a -dedicated user for TheHive, named thehive): - -``` -sudo mkdir /etc/thehive -(cat << _EOF_ -# Secret key -# ~~~~~ -# The secret key is used to secure cryptographic functions. -# If you deploy your application to several instances be sure to use the same key! -play.crypto.secret="$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)" -_EOF_ -) | sudo tee -a /etc/thehive/application.conf -``` - -Now you can start TheHive. - -For advanced configuration, please refer to the [configuration page](../admin/configuration.md) and the default configuration -information you can find [here](../admin/default-configuration.md). You will especially find all the necessary information to -connect TheHive with Cortex and MISP. - - -### 4.3. First start - -Change your current directory to TheHive installation directory (`/opt/thehive` in this guide), then execute: -``` -bin/thehive -Dconfig.file=/etc/thehive/application.conf -``` - -It is recommended to use a dedicated non-privileged user to start TheHive. If so, make sure that your user can create log -files in `/var/log/thehive/`. - -This command starts an HTTP service on port 9000/tcp. You can change the port by adding "http.port=8080" in the -configuration file or by adding the "-Dhttp.port=8080" parameter to the command line. If you run TheHive using a -non-privileged user, you can't bind a port under 1024. - - -If you'd rather start the application as a service, do the following: -``` -sudo addgroup thehive -sudo adduser --system thehive -sudo cp /opt/thehive/package/thehive.service /usr/lib/systemd/system -sudo chown -R thehive:thehive /opt/thehive -sudo chown thehive:thehive /etc/thehive/application.conf -sudo chmod 640 /etc/thehive/application.conf -sudo systemctl enable thehive -sudo service thehive start -``` - -Please note that the service may take some time to start.
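While waiting for the service to come up, a small polling helper can check the HTTP port before you open the browser. A Python sketch (illustrative; the real probe, shown in the comment, would open a TCP connection to 127.0.0.1:9000):

```python
import time

def wait_until(probe, attempts=30, delay=2.0):
    """Call `probe` (a zero-arg callable returning bool) until it succeeds
    or `attempts` tries have been made. Returns True on success."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# A real probe for TheHive's default port could be:
#   import socket
#   probe = lambda: socket.socket().connect_ex(("127.0.0.1", 9000)) == 0
print(wait_until(lambda: True, attempts=3, delay=0))  # → True
```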
-
-Then open your browser and connect to http://YOUR_SERVER_ADDRESS:9000/
-
-The first time you connect, you will have to create the database schema. Click "Migrate database" to do so.
-
-![](../files/installguide_update_database.png)
-
-Once done, you should be redirected to the page for creating the administrator's account.
-
-![](../files/installguide_create_admin.png)
-
-Once created, you should be redirected to the login page.
-
-![](../files/installguide_login.png)
-
-**Warning**: at this stage, if you skipped the creation of the admin user, you will not be able to create it later
-unless you delete the index in ElasticSearch. If you made a mistake, just delete the index with the following command
-(beware: it deletes everything in the database):
-```
-curl -X DELETE http://127.0.0.1:9200/the_hive_9
-```
-
-Then reload the page or restart TheHive.
-
-## 5. Update
-
-To update TheHive from binaries, just stop the service, download the latest package, rebuild the `/opt/thehive` link and
-restart the service.
-
-```
-service thehive stop
-cd /opt
-wget https://dl.bintray.com/cert-bdf/thehive/thehive-latest.zip
-unzip thehive-latest.zip
-rm /opt/thehive && ln -s thehive-x.x.x thehive
-chown -R thehive:thehive /opt/thehive /opt/thehive-x.x.x
-service thehive start
-```
diff --git a/docs/installation/build-guide.md b/docs/installation/build-guide.md
deleted file mode 100644
index 91d45a3e82..0000000000
--- a/docs/installation/build-guide.md
+++ /dev/null
@@ -1,194 +0,0 @@
-# Build from sources
-
-This document is a step-by-step guide to building TheHive from sources.
-
-## 1. Pre-requisites
-
-The following software is required to download and build TheHive.
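The update sequence above hinges on repointing the `/opt/thehive` symlink at the newly unpacked version. The swap can be rehearsed safely in a scratch directory first; the version numbers below are placeholders:

```shell
# Rehearse the /opt/thehive symlink swap in a throwaway directory
dir=$(mktemp -d)
cd "$dir"
mkdir thehive-2.11.0 thehive-2.11.1

# Initial state: the link points at the old release
ln -s thehive-2.11.0 thehive
readlink thehive   # prints "thehive-2.11.0"

# Update: remove the old link and point it at the new release,
# as in `rm /opt/thehive && ln -s thehive-x.x.x thehive`
rm thehive && ln -s thehive-2.11.1 thehive
readlink thehive   # prints "thehive-2.11.1"
```

Stopping the service before the swap and restarting it afterwards, as the guide describes, ensures no process holds the old path open.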
-
-* Java Development Kit 8 (JDK)
-  * downloadable from http://www.oracle.com/technetwork/java/javase/downloads/index.html
-* git
-  * use the system package or download it from http://www.git-scm.com/downloads
-* ElasticSearch 2.3
-  * downloadable from https://www.elastic.co/downloads/past-releases/elasticsearch-2-3-5
-* NodeJs with its package manager (NPM)
-  * downloadable from https://nodejs.org/en/download/
-* Grunt
-  * after NodeJs installation, run `sudo npm install -g grunt-cli`
-* Bower
-  * after NodeJs installation, run `sudo npm install -g bower`
-
-
-# 2. Quick Build Guide
-
-To install the requirements and build TheHive from sources, please follow the instructions below depending on your operating system.
-
-## 2.1. CentOS/RHEL
-
-### 2.1.1. Packages
-
-```
-sudo yum -y install git bzip2
-```
-
-### 2.1.2. Installation of OpenJDK
-
-```
-sudo yum -y install java-1.8.0-openjdk-devel
-```
-
-### 2.1.3. Installation of ElasticSearch
-
-Download and install the public signing key:
-
-```
-sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
-```
-
-Add the following in your `/etc/yum.repos.d/` directory in a file with a `.repo` suffix, for example `elasticsearch.repo`:
-
-```
-cat << __EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
-[elasticsearch-2.x]
-name=Elasticsearch repository for 2.x packages
-baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
-gpgcheck=1
-gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
-enabled=1
-__EOF
-```
-
-Your repository is ready for use. You can install ElasticSearch with:
-```
-sudo yum -y install elasticsearch
-```
-
-### 2.1.4. Installation of NodeJs
-
-Install the EPEL repository (the "extras" repository must be enabled):
-```
-sudo yum -y install epel-release
-```
-
-Then, you can install NodeJs:
-
-```
-sudo yum -y install nodejs
-```
-
-### 2.1.5. Installation of bower and grunt
-
-```
-sudo npm install -g bower grunt-cli
-```
-
-## 2.2. Ubuntu
-
-### 2.2.1.
Packages
-
-```
-sudo apt-get install git wget
-```
-
-### 2.2.2. Installation of Oracle JDK
-
-```
-echo 'deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main' | sudo tee -a /etc/apt/sources.list.d/java.list
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key EEA14886
-sudo apt-get update
-sudo apt-get install oracle-java8-installer
-```
-
-### 2.2.3. Installation of ElasticSearch
-
-```
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key D88E42B4
-echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
-sudo apt-get update && sudo apt-get install elasticsearch
-```
-
-### 2.2.4. Installation of NodeJs
-
-```
-sudo apt-get install wget
-wget -qO- https://deb.nodesource.com/setup_4.x | sudo bash -
-sudo apt-get install nodejs
-```
-
-### 2.2.5. Installation of bower and grunt
-
-```
-sudo npm install -g bower grunt-cli
-```
-
-## 2.3. TheHive
-
-### Download sources
-
-```
-git clone https://github.com/CERT-BDF/TheHive.git
-```
-
-### Build the projects
-
-```
-cd TheHive
-bin/activator clean stage
-```
-
-This may take a while, as it downloads all dependencies and then builds the back-end.
-The command cleans previous build files and creates a self-contained package in the `target/universal/stage` directory. This package contains TheHive binaries with the required libraries (`/lib`), analyzers (`/analyzers`), configuration files (`/conf`) and startup scripts (`/bin`).
-
-Binaries are built and stored in `TheHive/target/universal/stage/`. Install them in `/opt/thehive`, for example.
-
-```
-sudo cp -r TheHive/target/universal/stage /opt/thehive
-```
-
-Follow the [configuration part of the installation guide](Installation-guide#42-configuration) to run TheHive.
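Before copying the build output to `/opt/thehive`, it can help to confirm that the staged layout contains the directories the guide mentions. A minimal sketch, assuming it is run from the TheHive source tree after `bin/activator clean stage` has finished:

```shell
# Check that the staged package has the expected sub-directories
stage=target/universal/stage
for d in bin lib conf; do
  if [ -d "$stage/$d" ]; then
    echo "$stage/$d: ok"
  else
    echo "$stage/$d: missing" >&2
  fi
done
```

A "missing" line usually means the build stopped early; re-run `bin/activator clean stage` and check its output.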
-
-
-### Configure and start elasticsearch
-
-Edit `/etc/elasticsearch/elasticsearch.yml` and add the following lines:
-
-```
-network.host: 127.0.0.1
-script.inline: on
-cluster.name: hive
-threadpool.index.queue_size: 100000
-threadpool.search.queue_size: 100000
-threadpool.bulk.queue_size: 1000
-```
-
-Start the service:
-
-```
-service elasticsearch restart
-```
-
-
-### First start
-
-Follow [4.3. First start in the Installation guide](Installation-guide#43-first-start) to start using TheHive.
-
-
-## Build the front-end only
-Building the back-end also builds the front-end, so you don't need to build the front-end separately. This section is only useful for troubleshooting or for installing the front-end behind a reverse proxy.
-
-Go to the front-end directory:
-```
-cd TheHive/ui
-```
-
-Install the NodeJs libraries (required by the build step) and the bower libraries (JavaScript libraries downloaded by the browser), then build the front-end:
-```
-npm install
-bower install
-grunt build
-```
-
-This step generates the static files (HTML, JavaScript and related resources) in the `dist` directory. These files are ready to be served by an HTTP server.
diff --git a/docs/installation/deb-guide.md b/docs/installation/deb-guide.md
deleted file mode 100644
index ca8558823f..0000000000
--- a/docs/installation/deb-guide.md
+++ /dev/null
@@ -1,15 +0,0 @@
-# Installation of TheHive using DEB package
-
-Debian packages are published on our Bintray repository.
All packages are signed using the key [562CBC1C](/PGP-PUBLIC-KEY)
-(fingerprint: 0CD5 AC59 DE5C 5A8E 0EE1 3849 3D99 BB18 562C BC1C):
-
-```
-echo 'deb https://dl.bintray.com/cert-bdf/debian any main' | sudo tee -a /etc/apt/sources.list.d/thehive-project.list
-sudo apt-key adv --keyserver hkp://pgp.mit.edu --recv-key 562CBC1C
-sudo apt-get update
-sudo apt-get install thehive
-```
-
-After the package installation, you should install ElasticSearch
-(see the [ElasticSearch installation guide](elasticsearch-guide.md)) and configure TheHive
-(see the [configuration guide](../admin/configuration.md))
\ No newline at end of file
diff --git a/docs/installation/docker-guide.md b/docs/installation/docker-guide.md
deleted file mode 100644
index 93e54c7f77..0000000000
--- a/docs/installation/docker-guide.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# Install TheHive using docker
-
-This guide assumes that you will use docker.
-
-## How to use this image
-
-Since version 2.11, the TheHive docker image doesn't come with ElasticSearch. As TheHive requires it to work, you can:
- - use docker-compose, or
- - manually install and configure ElasticSearch.
-
-### Use of docker-compose
-
-Docker-compose can start multiple containers and link them together. It can be installed by following the
-[documentation](https://docs.docker.com/compose/install/).
-The following [docker-compose.yml](https://raw.githubusercontent.com/CERT-BDF/TheHive/master/docker/docker-compose.yml)
-file starts ElasticSearch, Cortex and TheHive:
-```
-version: "2"
-services:
-  elasticsearch:
-    image: elasticsearch:2
-    command: [
-      -Des.script.inline=on,
-      -Des.cluster.name=hive,
-      -Des.threadpool.index.queue_size=100000,
-      -Des.threadpool.search.queue_size=100000,
-      -Des.threadpool.bulk.queue_size=1000]
-  cortex:
-    image: certbdf/cortex:latest
-    ports:
-      - "0.0.0.0:9001:9000"
-  thehive:
-    image: certbdf/thehive:latest
-    depends_on:
-      - elasticsearch
-      - cortex
-    ports:
-      - "0.0.0.0:9000:9000"
-```
-Put this file in an empty folder and run `docker-compose up`. TheHive is exposed on port 9000/tcp and Cortex on port
-9001/tcp. These ports can be changed by modifying the docker-compose file.
-
-You can specify a custom `application.conf` file by adding a volume entry
-(`/path/to/application.conf:/etc/thehive/application.conf`) in the `thehive` section.
-
-You should define where the data (the ElasticSearch database) will be stored on your server by adding a volume entry
-(`/path/to/data:/usr/share/elasticsearch/data`) in the `elasticsearch` section.
-
-### Manual installation of ElasticSearch
-
-ElasticSearch can be installed on the same server as TheHive or on a different one. You can then configure TheHive
-according to the [documentation](../admin/configuration.md) and run TheHive docker as follows:
-```
-docker run --volume /path/to/thehive/application.conf:/etc/thehive/application.conf certbdf/thehive:latest --no-config
-```
-
-You can add the `--publish` docker option to expose TheHive HTTP service.
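Combining the two volume notes above, the relevant sections of the docker-compose file might be extended like this. This is only a sketch; the host paths are placeholders:

```yaml
services:
  elasticsearch:
    image: elasticsearch:2
    volumes:
      # persist the ElasticSearch database on the host
      - /path/to/data:/usr/share/elasticsearch/data
  thehive:
    image: certbdf/thehive:latest
    volumes:
      # mount a custom TheHive configuration
      - /path/to/application.conf:/etc/thehive/application.conf
```

Without the `elasticsearch` volume, the database lives inside the container and is lost when the container is removed.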
-
-## Customize TheHive docker
-
-By default, the TheHive docker image adds a minimal configuration:
- - choose a random secret (`play.crypto.secret`)
- - search for an ElasticSearch instance (host named `elasticsearch`) and add it to the configuration
- - search for a Cortex instance (host named `cortex`) and add it to the configuration
-
-This behaviour can be disabled by adding `--no-config` to the docker command line:
-`docker run certbdf/thehive:latest --no-config` or by adding the line `command: --no-config` in the `thehive` section of
-the docker-compose file.
-
-The docker image accepts more options:
- - --no-config : do not try to configure TheHive (add secret and elasticsearch)
- - --no-config-secret : do not add a random secret to the configuration
- - --no-config-es : do not add elasticsearch hosts to the configuration
- - --es-hosts : use this string to configure the elasticsearch hosts (format: ["host1:9300","host2:9300"])
- - --es-hostname : resolve this hostname to find elasticsearch instances
- - --secret : secret used to secure sessions
- - --cortex-proto : define the protocol to connect to Cortex (default: http)
- - --cortex-port : define the port to connect to Cortex (default: 9000)
- - --cortex-url : add a Cortex connection
- - --cortex-hostname : resolve this hostname to find Cortex instances
-
-If you disable the automatic ElasticSearch configuration, you must install and configure ElasticSearch yourself.
-
-The easiest way to start TheHive:
-```
-docker run certbdf/thehive
-```
\ No newline at end of file
diff --git a/docs/installation/elasticsearch-guide.md b/docs/installation/elasticsearch-guide.md
deleted file mode 100644
index 95b3c852c6..0000000000
--- a/docs/installation/elasticsearch-guide.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Installation guide of ElasticSearch
-
-ElasticSearch can be installed using a system package or docker. The latter is preferred as its installation and updates
-are easier.
-
-## Install ElasticSearch using system package
-Install the ElasticSearch package provided by Elastic:
-```
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key D88E42B4
-echo "deb https://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
-sudo apt-get update && sudo apt-get install elasticsearch
-```
-
-The Debian package does not start the service by default. This is to prevent the instance from
-accidentally joining a cluster without being configured appropriately.
-
-If you prefer using ElasticSearch inside a docker, see
-[ElasticSearch inside a Docker](#elasticsearch-inside-a-docker).
-
-### ElasticSearch configuration
-
-It is **highly recommended** to avoid exposing this service to an untrusted zone.
-
-If ElasticSearch and TheHive run on the same host (and not in a docker), edit `/etc/elasticsearch/elasticsearch.yml` and
-set the `network.host` parameter to `127.0.0.1`.
-TheHive uses dynamic scripts to make partial updates. Hence, they must be enabled using `script.inline: on`.
-
-The cluster name must also be set ("hive" for example).
-
-The threadpool queue sizes must be set to a high value (100000); with the default sizes, the queues are easily
-overloaded.
-
-Edit `/etc/elasticsearch/elasticsearch.yml` and add the following lines:
-
-```
-network.host: 127.0.0.1
-script.inline: on
-cluster.name: hive
-threadpool.index.queue_size: 100000
-threadpool.search.queue_size: 100000
-threadpool.bulk.queue_size: 1000
-```
-
-### Start the Service
-Now that ElasticSearch is configured, start it as a service:
-```
-sudo systemctl enable elasticsearch.service
-sudo service elasticsearch start
-```
-
-Note that by default, the database is stored in `/var/lib/elasticsearch`.
-
-## ElasticSearch inside a Docker
-
-You can also start ElasticSearch inside a docker.
Use the following command and do not forget to specify the absolute
-path for persistent data on your host:
-
-```
-docker run \
-  --publish 127.0.0.1:9200:9200 \
-  --publish 127.0.0.1:9300:9300 \
-  --volume /absolute/path/to/persistent/data/:/usr/share/elasticsearch/data \
-  --rm \
-  elasticsearch:2 \
-  -Des.script.inline=on \
-  -Des.cluster.name=hive \
-  -Des.threadpool.index.queue_size=100000 \
-  -Des.threadpool.search.queue_size=100000 \
-  -Des.threadpool.bulk.queue_size=1000
-```
diff --git a/docs/installation/rpm-guide.md b/docs/installation/rpm-guide.md
deleted file mode 100644
index f701a2a9a6..0000000000
--- a/docs/installation/rpm-guide.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# Installing TheHive Using an RPM Package
-
-TheHive's RPM packages are published on our Bintray repository. All packages are PGP signed using the key whose ID is [562CBC1C](/PGP-PUBLIC-KEY). The key's fingerprint is:
-
-```0CD5 AC59 DE5C 5A8E 0EE1 3849 3D99 BB18 562C BC1C```
-
-To install TheHive from an RPM package, start by installing the RPM release package using the following command:
-```
-yum install https://dl.bintray.com/cert-bdf/rpm/thehive-project-release-1.0.0-3.noarch.rpm
-```
-This will install TheHive Project's repository in `/etc/yum.repos.d/thehive-rpm.repo` and the GPG public key in
-`/etc/pki/rpm-gpg/GPG-TheHive-Project`.
-
-Once done, you will be able to install the TheHive package using yum:
-```
-yum install thehive
-```
-
-Once installed, you should [install ElasticSearch](elasticsearch-guide.md) and [configure TheHive](../admin/configuration.md).
diff --git a/docs/migration-guide.md b/docs/migration-guide.md
deleted file mode 100644
index 37ce870eca..0000000000
--- a/docs/migration-guide.md
+++ /dev/null
@@ -1,63 +0,0 @@
-# Migration guide
-
-## From 2.10.x to 2.11.x
-
-### Database migration
-
-On the first connection to TheHive 2.11, you will be asked to migrate the database. This will create a new ElasticSearch
-index (the_hive_9).
See [Updating](admin/updating.md).
-
-### MISP to alert
-
-MISP synchronization is now done using the alerting framework. MISP events are handled like any other alert. You can use
-[TheHive4py](https://github.com/CERT-BDF/TheHive4py) to create your own alerts.
-
-### Configuration changes
-
-#### MISP certificate authority deprecated
-
-Specifying the certificate authority in the MISP configuration using the "cert" key is now deprecated. You must replace it:
-- before:
-```
-misp {
-  [...]
-  cert = "/path/to/truststore.jks"
-}
-```
-- after:
-```
-misp {
-  [...]
-  ws.ssl.trustManager.stores = [
-    {
-      type: "JKS"
-      path: "/path/to/truststore.jks"
-    }
-  ]
-}
-```
-
-The `ws` key can be placed in a MISP server section or in the global MISP section. In the latter case, the ws configuration
-is applied to all MISP instances.
-
-#### Cortex and MISP HTTP client options
-
-The HTTP client used by Cortex and MISP is more configurable. A proxy can be configured, with or without authentication. Refer
-to the [configuration](admin/configuration.md#8-http-client-configuration) for all possible options.
-
-
-### Packages
-
-#### New RPM and DEB packages
-
-RPM and DEB packages are now provided. This makes installation easier than using the binary package (zip). See the
-[Debian installation guide](installation/deb-guide.md) and the [RPM installation guide](installation/rpm-guide.md).
-
-#### Docker
-
-The all-in-one docker image (containing TheHive and Cortex) is no longer provided. The new TheHive docker image doesn't
-contain ElasticSearch. We recommend using docker-compose to link the TheHive, ElasticSearch and Cortex dockers. For more
-information, see the [docker guide](installation/docker-guide.md).
-
-TheHive configuration is located at /etc/thehive/application.conf for all packages. If you use the docker package, you must
-update its location (it was previously /opt/docker/conf/application.conf).
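As an illustration of the `ws` placement described in the migration guide above, when every MISP instance shares the same truststore, the key can be declared once at the global MISP level instead of per server. This is only a sketch; the path is a placeholder:

```
misp {
  # applied to every MISP instance declared in this section
  ws.ssl.trustManager.stores = [
    { type: "JKS", path: "/path/to/truststore.jks" }
  ]
  [...]
}
```

Per-server `ws` settings, when present, take precedence over the global ones for that instance.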
diff --git a/thehive-backend/app/models/Alert.scala b/thehive-backend/app/models/Alert.scala index 37db3b9265..c4f2a67e4c 100644 --- a/thehive-backend/app/models/Alert.scala +++ b/thehive-backend/app/models/Alert.scala @@ -31,7 +31,7 @@ trait AlertAttributes { val caze: A[Option[String]] = optionalAttribute("case", F.stringFmt, "Id of the case, if created") val title: A[String] = attribute("title", F.textFmt, "Title of the alert") val description: A[String] = attribute("description", F.textFmt, "Description of the alert") - val severity: A[Long] = attribute("severity", F.numberFmt, "Severity if the alert (0-5)", 3L) + val severity: A[Long] = attribute("severity", F.numberFmt, "Severity if the alert (0-3)", 2L) val tags: A[Seq[String]] = multiAttribute("tags", F.stringFmt, "Alert tags") val tlp: A[Long] = attribute("tlp", F.numberFmt, "TLP level", 2L) val artifacts: A[Seq[JsObject]] = multiAttribute("artifacts", F.objectFmt(artifactAttributes), "Artifact of the alert") diff --git a/thehive-backend/app/models/Artifact.scala b/thehive-backend/app/models/Artifact.scala index 8169db04a2..6d613398e2 100644 --- a/thehive-backend/app/models/Artifact.scala +++ b/thehive-backend/app/models/Artifact.scala @@ -6,7 +6,7 @@ import javax.inject.{ Inject, Provider, Singleton } import scala.concurrent.{ ExecutionContext, Future } import scala.language.postfixOps import akka.stream.Materializer -import play.api.libs.json.{ JsNull, JsObject, JsString, JsValue } +import play.api.libs.json.{ JsNull, JsObject, JsString, JsValue, JsArray } import play.api.libs.json.JsLookupResult.jsLookupResultToJsLookup import play.api.libs.json.JsValue.jsValueToJsLookup import play.api.libs.json.Json @@ -47,6 +47,11 @@ class ArtifactModel @Inject() ( implicit val ec: ExecutionContext) extends ChildModelDef[ArtifactModel, Artifact, CaseModel, Case](caseModel, "case_artifact") with ArtifactAttributes with AuditedModel { override val removeAttribute: JsObject = Json.obj("status" → ArtifactStatus.Deleted) 
+ override def apply(attributes: JsObject) = { + val tags = (attributes \ "tags").asOpt[Seq[JsString]].getOrElse(Nil).distinct + new Artifact(this, attributes + ("tags" → JsArray(tags))) + } + // this method modify request in order to hash artifact and manager file upload override def creationHook(parent: Option[BaseEntity], attrs: JsObject): Future[JsObject] = { val keys = attrs.keys diff --git a/thehive-backend/app/models/Case.scala b/thehive-backend/app/models/Case.scala index 25efe41ca1..a277dce37c 100644 --- a/thehive-backend/app/models/Case.scala +++ b/thehive-backend/app/models/Case.scala @@ -34,7 +34,7 @@ trait CaseAttributes { _: AttributeDef ⇒ val caseId: A[Long] = attribute("caseId", F.numberFmt, "Id of the case (auto-generated)", O.model) val title: A[String] = attribute("title", F.textFmt, "Title of the case") val description: A[String] = attribute("description", F.textFmt, "Description of the case") - val severity: A[Long] = attribute("severity", F.numberFmt, "Severity if the case is an incident (0-5)", 3L) + val severity: A[Long] = attribute("severity", F.numberFmt, "Severity if the case is an incident (0-3)", 2L) val owner: A[String] = attribute("owner", F.stringFmt, "Owner of the case") val startDate: A[Date] = attribute("startDate", F.dateFmt, "Creation date", new Date) val endDate: A[Option[Date]] = optionalAttribute("endDate", F.dateFmt, "Resolution date") diff --git a/thehive-backend/app/services/AlertSrv.scala b/thehive-backend/app/services/AlertSrv.scala index 29c11ff5f3..6e6c9d181a 100644 --- a/thehive-backend/app/services/AlertSrv.scala +++ b/thehive-backend/app/services/AlertSrv.scala @@ -8,7 +8,7 @@ import akka.stream.Materializer import akka.stream.scaladsl.{ Sink, Source } import connectors.ConnectorRouter import models._ -import org.elastic4play.controllers.{ AttachmentInputValue, Fields, FileInputValue } +import org.elastic4play.controllers.{ Fields, FileInputValue } import org.elastic4play.services._ import play.api.{ Configuration, 
Logger } import play.api.libs.json._ @@ -111,7 +111,7 @@ class AlertSrv( } } - private def getCaseTemplate(alert: Alert) = { + def getCaseTemplate(alert: Alert) = { val templateName = alert.caseTemplate() .orElse(templates.get(alert.tpe())) .getOrElse(alert.tpe()) @@ -130,16 +130,16 @@ class AlertSrv( case Some(connector: AlertTransformer) ⇒ connector.createCase(alert) case _ ⇒ getCaseTemplate(alert).flatMap { caseTemplate ⇒ - caseSrv.create(Fields.empty - .set("title", (caseTemplate - .flatMap(_.titlePrefix()) - .getOrElse("") + s" #${alert.sourceRef()} " + alert.title()) - .trim) - .set("description", alert.description()) - .set("severity", JsNumber(alert.severity())) - .set("tags", JsArray(alert.tags().map(JsString))) - .set("tlp", JsNumber(alert.tlp())) - .set("status", CaseStatus.Open.toString)) + println(s"Create case using template $caseTemplate") + caseSrv.create( + Fields.empty + .set("title", s"#${alert.sourceRef()} " + alert.title()) + .set("description", alert.description()) + .set("severity", JsNumber(alert.severity())) + .set("tags", JsArray(alert.tags().map(JsString))) + .set("tlp", JsNumber(alert.tlp())) + .set("status", CaseStatus.Open.toString), + caseTemplate) .flatMap { caze ⇒ setCase(alert, caze).map(_ ⇒ caze) } .flatMap { caze ⇒ val artifactsFields = alert.artifacts() @@ -177,9 +177,9 @@ class AlertSrv( } caze } - createdCase.onComplete { + createdCase.onComplete { _ ⇒ // remove temporary files - case _ ⇒ artifactsFields + artifactsFields .flatMap(_.get("Attachment")) .foreach { case FileInputValue(_, file, _) ⇒ Files.delete(file) diff --git a/thehive-backend/app/services/CaseMergeSrv.scala b/thehive-backend/app/services/CaseMergeSrv.scala index 0164e855af..50edd8cb0b 100644 --- a/thehive-backend/app/services/CaseMergeSrv.scala +++ b/thehive-backend/app/services/CaseMergeSrv.scala @@ -1,33 +1,22 @@ package services import java.util.Date - import javax.inject.{ Inject, Singleton } -import scala.concurrent.{ ExecutionContext, Future } -import 
scala.math.BigDecimal.long2bigDecimal - import akka.Done import akka.stream.Materializer import akka.stream.scaladsl.Sink - -import play.api.libs.json.{ JsArray, JsBoolean, JsNull, JsNumber, JsObject, JsString, JsValue } -import play.api.libs.json.JsValue.jsValueToJsLookup -import play.api.libs.json.Json - +import models._ import org.elastic4play.controllers.{ AttachmentInputValue, Fields } import org.elastic4play.models.BaseEntity -import org.elastic4play.services.AuthContext -import org.elastic4play.services.QueryDSL - -import models.{ Artifact, ArtifactStatus, Case, CaseImpactStatus, CaseResolutionStatus, CaseStatus, Task } +import org.elastic4play.services.{ AuthContext, EventMessage, EventSrv } import play.api.Logger -import scala.util.Success +import play.api.libs.json.JsValue.jsValueToJsLookup +import play.api.libs.json._ + +import scala.concurrent.{ ExecutionContext, Future } +import scala.math.BigDecimal.long2bigDecimal import scala.util.Failure -import models.TaskStatus -import models.LogStatus -import org.elastic4play.services.EventMessage -import org.elastic4play.services.EventSrv case class MergeArtifact(newArtifact: Artifact, artifacts: Seq[Artifact], authContext: AuthContext) extends EventMessage @@ -44,12 +33,14 @@ class CaseMergeSrv @Inject() ( private[CaseMergeSrv] lazy val logger = Logger(getClass) import org.elastic4play.services.QueryDSL._ + private[services] def concat[E](entities: Seq[E], sep: String, getId: E ⇒ Long, getStr: E ⇒ String) = { JsString(entities.map(e ⇒ s"#${getId(e)}:${getStr(e)}").mkString(sep)) } private[services] def concatCaseDescription(cases: Seq[Case]) = { val str = cases + .filterNot(_.description().trim.isEmpty) .map { caze ⇒ s"#### ${caze.title()} ([#${caze.caseId()}](#/case/${caze.id}/details))\n\n${caze.description()}" } @@ -132,8 +123,13 @@ class CaseMergeSrv @Inject() ( } private[services] def mergeTasksAndLogs(newCase: Case, cases: Seq[Case])(implicit authContext: AuthContext): Future[Done] = { - val (tasks, 
futureTaskCount) = taskSrv.find(and(parent("case", withId(cases.map(_.id): _*)), "status" ~!= TaskStatus.Cancel), Some("all"), Nil) + val (tasks, futureTaskCount) = taskSrv.find(and( + parent("case", withId(cases.map(_.id): _*)), + "status" ~!= TaskStatus.Cancel, + "status" ~!= TaskStatus.Waiting), Some("all"), Nil) + futureTaskCount.foreach(count ⇒ logger.info(s"Creating $count task(s):")) + tasks .mapAsyncUnordered(5) { task ⇒ taskSrv.create(newCase, baseFields(task)).map(task → _) } .flatMapConcat { @@ -151,6 +147,28 @@ class CaseMergeSrv @Inject() ( logSrv.create(task, fields) } .runWith(Sink.ignore) + .andThen { + case _ ⇒ + taskSrv.find(and( + parent("case", withId(cases.map(_.id): _*)), + "status" ~= TaskStatus.Waiting), Some("all"), Nil) + ._1 + .fold(Seq.empty[Task]) { + case (uniqueTasks, task) if !uniqueTasks.exists(_.title() == task.title()) ⇒ + uniqueTasks :+ task + case (uniqueTasks, _) ⇒ uniqueTasks + } + .map(_.map(baseFields)) + .mapAsyncUnordered(5) { tasksFields ⇒ + taskSrv.create(newCase, tasksFields) + } + .mapConcat(_.toList) + .map { + case Failure(error) ⇒ logger.warn("Task creation fails", error) + case _ ⇒ + } + .runWith(Sink.ignore) + } } private[services] def mergeArtifactStatus(artifacts: Seq[Artifact]) = { @@ -197,7 +215,7 @@ class CaseMergeSrv @Inject() ( .set("message", concat[Artifact](sameArtifacts, "\n \n", a ⇒ caseMap(a.parentId.get).caseId(), _.message())) .set("startDate", firstDate(sameArtifacts.map(_.startDate()))) .set("tlp", JsNumber(sameArtifacts.map(_.tlp()).min)) - .set("tags", JsArray(sameArtifacts.flatMap(_.tags()).map(JsString))) + .set("tags", JsArray(sameArtifacts.flatMap(_.tags()).distinct.map(JsString))) .set("ioc", JsBoolean(sameArtifacts.map(_.ioc()).reduce(_ || _))) .set("status", mergeArtifactStatus(sameArtifacts)) // Merged artifact is created under new case diff --git a/thehive-misp/app/connectors/misp/MispSrv.scala b/thehive-misp/app/connectors/misp/MispSrv.scala index c8693177af..6cc545bec5 100644 --- 
a/thehive-misp/app/connectors/misp/MispSrv.scala +++ b/thehive-misp/app/connectors/misp/MispSrv.scala @@ -338,7 +338,8 @@ class MispSrv @Inject() ( case None ⇒ for { instanceConfig ← getInstanceConfig(alert.source()) - caze ← caseSrv.create(Fields(alert.toCaseJson)) + caseTemplate ← alertSrv.getCaseTemplate(alert) + caze ← caseSrv.create(Fields(alert.toCaseJson), caseTemplate) _ ← alertSrv.setCase(alert, caze) artifacts ← Future.sequence(alert.artifacts().flatMap(attributeToArtifact(instanceConfig, alert, _))) _ ← artifactSrv.create(caze, artifacts) diff --git a/ui/app/scripts/controllers/admin/AdminCaseTemplatesCtrl.js b/ui/app/scripts/controllers/admin/AdminCaseTemplatesCtrl.js index 230791a9d0..6032f443ab 100644 --- a/ui/app/scripts/controllers/admin/AdminCaseTemplatesCtrl.js +++ b/ui/app/scripts/controllers/admin/AdminCaseTemplatesCtrl.js @@ -139,7 +139,6 @@ }; $scope.createTemplate = function() { - console.log("Create Template: " + $scope.template.name); return TemplateSrv.save($scope.template, function() { $scope.getList(0); @@ -152,10 +151,9 @@ }; $scope.updateTemplate = function() { - console.log("Update Template: " + $scope.template.name); return TemplateSrv.update({ templateId: $scope.template.id - }, _.omit($scope.template, ['id', 'user', 'type']), function() { + }, _.omit($scope.template, ['id', 'user', '_type']), function() { $scope.getList($scope.templateIndex); $scope.$emit('templates:refresh'); diff --git a/ui/app/scripts/controllers/admin/AdminObservablesCtrl.js b/ui/app/scripts/controllers/admin/AdminObservablesCtrl.js index 5c645f55f1..37627e5e3f 100644 --- a/ui/app/scripts/controllers/admin/AdminObservablesCtrl.js +++ b/ui/app/scripts/controllers/admin/AdminObservablesCtrl.js @@ -46,7 +46,6 @@ ListSrv['delete']({ 'listId': datatype.id }, function(data) { - console.log(data); NotificationSrv.log('The datatype ' + datatype.value + ' has been removed', 'success'); $scope.load(); }, function(response) { diff --git 
diff --git a/ui/app/scripts/controllers/case/CaseObservablesCtrl.js b/ui/app/scripts/controllers/case/CaseObservablesCtrl.js
index e669a2c8db..40f312bc46 100644
--- a/ui/app/scripts/controllers/case/CaseObservablesCtrl.js
+++ b/ui/app/scripts/controllers/case/CaseObservablesCtrl.js
@@ -46,6 +46,10 @@
                 $scope.uiSrv.setPageSize(newValue);
             });
 
+            $scope.keys = function(obj) {
+                return _.keys(obj || {});
+            };
+
             $scope.toggleStats = function () {
                 $scope.uiSrv.toggleStats();
             };
diff --git a/ui/app/scripts/controllers/case/CaseObservablesItemCtrl.js b/ui/app/scripts/controllers/case/CaseObservablesItemCtrl.js
index a19ce2b914..9bed9f8e13 100644
--- a/ui/app/scripts/controllers/case/CaseObservablesItemCtrl.js
+++ b/ui/app/scripts/controllers/case/CaseObservablesItemCtrl.js
@@ -107,7 +107,6 @@
                     return item.base.details;
                 }), 'status');
 
-                console.log(statuses);
                 if(statuses.indexOf('Success') > -1) {
                     CaseArtifactSrv.api().get({
                         'artifactId': observableId
diff --git a/ui/app/scripts/controllers/case/ObservableCreationCtrl.js b/ui/app/scripts/controllers/case/ObservableCreationCtrl.js
index 8f4a07ac1e..3fc95327e4 100644
--- a/ui/app/scripts/controllers/case/ObservableCreationCtrl.js
+++ b/ui/app/scripts/controllers/case/ObservableCreationCtrl.js
@@ -105,12 +105,10 @@
                         data: observable.object.data,
                         type: observable.type
                     };
-                });
+                });
             };
 
             $scope.handleSaveSuccess = function(response) {
-                console.log('Observable create modal closed');
-
                 var success = 0,
                     failure = 0;
diff --git a/ui/app/scripts/directives/utils/autofocus.js b/ui/app/scripts/directives/utils/autofocus.js
index f768fa618d..15397c1a06 100644
--- a/ui/app/scripts/directives/utils/autofocus.js
+++ b/ui/app/scripts/directives/utils/autofocus.js
@@ -6,14 +6,13 @@
         return {
             restrict: 'A',
             link: function ($scope, $element, attr) {
-
+
                 $scope.$on(attr.autofocus, function() {
-                    console.log('Event received = ' + attr.autofocus);
                     $timeout(function() {
                         $element[0].focus();
                     }, 0);
-
-                });
+
+                });
             }
         }
     });
diff --git a/ui/app/scripts/services/ObservablesUISrv.js b/ui/app/scripts/services/ObservablesUISrv.js
index a52d85389f..c36080e08f 100644
--- a/ui/app/scripts/services/ObservablesUISrv.js
+++ b/ui/app/scripts/services/ObservablesUISrv.js
@@ -150,17 +150,6 @@
                 convertFn = filterDef.convert || angular.identity;
 
             // Prepare the filter value
-            /*
-            if(factory.hasFilter(field)) {
-                var oldValue = factory.getFilterValue(field);
-                console.log('Filter ['+field+'] already exists = ' + oldValue);
-
-                if(factory.filterDefs[field].type === 'list') {
-                    value = angular.isArray(oldValue) ? oldValue.push({text: value}) : [{text: oldValue}, {text: value}];
-                }
-            }
-            */
-
             if (field === 'keyword') {
                 query = value;
             } else if (angular.isArray(value) && value.length > 0) {
diff --git a/ui/app/views/directives/tag-list.html b/ui/app/views/directives/tag-list.html
index 9d033825f7..aaf6a2d7c8 100644
--- a/ui/app/views/directives/tag-list.html
+++ b/ui/app/views/directives/tag-list.html
@@ -1,3 +1,3 @@
-    {{tag}}
+    {{tag}}
diff --git a/ui/app/views/partials/alert/event.dialog.html b/ui/app/views/partials/alert/event.dialog.html
index f6f855e205..5580f98304 100644
--- a/ui/app/views/partials/alert/event.dialog.html
+++ b/ui/app/views/partials/alert/event.dialog.html
@@ -36,7 +36,10 @@
 Tags
-
+    None
+    {{tag}}
diff --git a/ui/app/views/partials/alert/list.html b/ui/app/views/partials/alert/list.html
index 8bd87f364e..e083799f1e 100644
--- a/ui/app/views/partials/alert/list.html
+++ b/ui/app/views/partials/alert/list.html
@@ -80,9 +80,9 @@
 List of alerts ({{$vm.list.total || 0}} of {{alertEvents.c
-    Tags:
+    None
-    {{tag}}
+    {{tag}}
 {{event.source}}
diff --git a/ui/app/views/partials/case/case.list.html b/ui/app/views/partials/case/case.list.html
index 9adc32d5e6..75a9dd3ac7 100644
--- a/ui/app/views/partials/case/case.list.html
+++ b/ui/app/views/partials/case/case.list.html
@@ -71,9 +71,9 @@
 List of cases ({{$vm.list.total || 0}} of {{$vm.caseStats.
 #{{currentCase.caseId}} - {{currentCase.title}}
-    Tags:
+    None
-    {{tag}}
+    {{tag}}
diff --git a/ui/app/views/partials/observables/list/artifacts-list-main.html b/ui/app/views/partials/observables/list/artifacts-list-main.html
index debe36a6e4..322f210546 100644
--- a/ui/app/views/partials/observables/list/artifacts-list-main.html
+++ b/ui/app/views/partials/observables/list/artifacts-list-main.html
@@ -32,7 +32,7 @@
 List of observables ({{artifacts.total || 0}} of {{artifactStats.count}})
@@ -43,7 +43,7 @@
 List of observables ({{artifacts.total || 0}} of {{artifactStats.count}})
@@ -60,18 +60,27 @@
 List of observables ({{artifacts.total || 0}} of {{artifactStats.count}})
 Type  Data/Filename  Tags  Analysis  Date added
+    {{(artifact.data | fang) || (artifact.attachment.name | fang)}}
+    None
+    {{tag}}
 {{(artifact.data | fang) || (artifact.attachment.name | fang)}}
-    {{artifact.startDate | shortDate}}
+    {{artifact.startDate | shortDate}}
diff --git a/ui/bower.json b/ui/bower.json
index 01bbc49fb2..046b3b5cb0 100644
--- a/ui/bower.json
+++ b/ui/bower.json
@@ -1,6 +1,6 @@
 {
   "name": "thehive",
-  "version": "2.11.0",
+  "version": "2.11.1",
   "license": "AGPL-3.0",
   "dependencies": {
     "angular": "1.5.8",
diff --git a/ui/package.json b/ui/package.json
index fd2e217571..199bfa4202 100644
--- a/ui/package.json
+++ b/ui/package.json
@@ -1,6 +1,6 @@
 {
   "name": "thehive",
-  "version": "2.11.0",
+  "version": "2.11.1",
   "license": "AGPL-3.0",
   "repository": {
     "type": "git",
diff --git a/version.sbt b/version.sbt
index 2cb8c0de54..06d4f85e70 100644
--- a/version.sbt
+++ b/version.sbt
@@ -1 +1 @@
-version in ThisBuild := "2.11.0"
+version in ThisBuild := "2.11.1"
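The `$scope.keys` helper added to CaseObservablesCtrl.js in this release guards against `null`/`undefined` stats objects before the template iterates their keys (supporting issue #211, which shows the available-reports count per observable). A minimal standalone sketch of the same pattern, with plain `Object.keys` standing in for underscore's `_.keys` (they behave equivalently for plain objects):

```javascript
// Null-safe key listing, as in the $scope.keys helper: the `|| {}`
// fallback means a missing stats object yields an empty array instead
// of throwing a TypeError.
function keys(obj) {
  return Object.keys(obj || {});
}

console.log(keys({ malware: 2, phishing: 1 })); // ['malware', 'phishing']
console.log(keys(null));                        // []
```

Without the fallback, `Object.keys(null)` (like `_.keys` on a template scope value that has not loaded yet) would fail, so the guard lets the view render before report stats arrive.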