
docker's Introduction

OpenCTI Docker deployment

Documentation

You can find the detailed documentation about the Docker installation in the OpenCTI documentation space.

Community

Status & bugs

OpenCTI is currently under heavy development. If you wish to report bugs or ask for new features, you can use the GitHub issues module directly.

Discussion

If you need support or wish to start a discussion about the OpenCTI platform, feel free to join us on our Slack channel. You can also email us at [email protected].

About

OpenCTI is a product designed and developed by the company Filigran.

docker's People

Contributors

00willo, 2xyo, filigran-automation, graememeyergt, jeremycloarec, mavam, mkdemir, mschreib28, nor3th, peasead, renovate[bot], rhaist, richard-julien, rolandpeelen, samuelhassine, sarahbocognano, synchroack


docker's Issues

port 8080 no work

Hi!
For some reason, port 8080 is not working. I have followed all the instructions step by step many times, and only port 9000 works. I have "reinstalled" (removed Docker completely) and it still doesn't connect. It connected only once (without doing anything differently, step by step), but when I restarted the service, port 8080 stopped connecting again.
Regards and thanks very much!

Url configuration must be configured

Hi,

I'm trying to run sudo docker-compose up and I see multiple errors saying "Url configuration must be configured".

Where do I configure the URL? It's not in the .env file.
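For context, this error is raised when the platform's base URL is not set. In current revisions of this repository, the .env file carries OPENCTI_BASE_URL, which docker-compose.yml maps onto the platform's APP__BASE_URL environment variable; if your checkout predates that variable, it can be added by hand. A minimal sketch, assuming a local deployment on port 8080:

# .env — add the variable if it is absent
OPENCTI_BASE_URL=http://localhost:8080

# docker-compose.yml — under the opencti service's environment:
#   - APP__BASE_URL=${OPENCTI_BASE_URL}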

Could not load credentials from any providers

Need help figuring out how I can solve this. I have deployed the server in Azure Cloud, using docker-compose.yml and docker stack deploy.

{"category":"APP","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2023-01-18T02:29:10.979Z","version":"5.5.2"}
{"category":"APP","level":"info","message":"[OPENCTI] Checking dependencies statuses","timestamp":"2023-01-18T02:29:10.982Z","version":"5.5.2"}
{"category":"APP","level":"info","message":"[SEARCH] Elasticsearch (8.5.3) client selected / runtime sorting enabled","timestamp":"2023-01-18T02:29:11.047Z","version":"5.5.2"}
{"category":"APP","level":"info","message":"[CHECK] Search engine is alive","timestamp":"2023-01-18T02:29:11.049Z","version":"5.5.2"}
{"category":"APP","error":{"context":{},"message":"Could not load credentials from any providers","name":"CredentialsProviderError","stack":"CredentialsProviderError: Could not load credentials from any providers\n at provider (/opt/opencti/build/src/database/file-storage.js:45:13)\n at /opt/opencti/build/node_modules/@aws-sdk/property-provider/dist-cjs/chain.js:11:28\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at coalesceProvider (/opt/opencti/build/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:14:24)\n at _Be.credentialProvider (/opt/opencti/build/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:33:24)\n at _Be.signRequest (/opt/opencti/build/node_modules/@aws-sdk/signature-v4/dist-cjs/SignatureV4.js:86:29)\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:16:18\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-retry/dist-cjs/retryMiddleware.js:27:46\n at /opt/opencti/build/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:5:22\n at initializeBucket (/opt/opencti/build/src/database/file-storage.js:76:5)\n at checkSystemDependencies (/opt/opencti/build/src/initialization.js:132:3)\n at boot (/opt/opencti/build/src/boot.js:10:5)"},"level":"error","message":"[OPENCTI] Platform start fail","timestamp":"2023-01-18T02:29:11.088Z","version":"5.5.2"}

version: '3'
#networks:
#  ext0:
#    external: true
services:
  redis:
    container_name: opencti-redis
    image: redis:7.0.6
    ports:
      - 6379:6379
    restart: always
    volumes:
      - redisdata:/data:rw
  elasticsearch:
    container_name: opencti-elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.3
    volumes:
      - esdata:/usr/share/elasticsearch/data:rw
    ports:
      - 9200:9200/tcp
      - 9300:9300/tcp
    environment:
      #- bootstrap.memory_lock=true
      #- http.cors.enabled=true
      #- http.cors.allow-origin=*
      # - ELASTICSEARCH_SSL_CA=/home/ansible/.opencti-ssl/opencti-play.pem
      # Comment out the line below for a cluster of multiple nodes
      - discovery.type=single-node
      # Uncomment the line below for a cluster of multiple nodes
      # - cluster.name=docker-cluster
      - xpack.ml.enabled=false
      - xpack.security.enabled=false
      #- "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  opencti-kibana:
    container_name: opencti-kibana
    image: docker.elastic.co/kibana/kibana:8.5.3
    environment:
      - ELASTICSEARCH_HOSTS=http://opencti-elasticsearch:9200
    restart: always
    ports:
      - 5601:5601
    depends_on:
      - opencti-elasticsearch
  minio:
    container_name: opencti-minio
    image: minio/minio:RELEASE.2022-09-25T15-44-53Z
    volumes:
      - s3data:/data:rw
    ports:
      - 9001:9001/tcp
      - 9000:9000/tcp
      - 127.0.0.1:50001:9000/tcp
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    container_name: opencti-rabbitmq
    image: rabbitmq:3.11-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq:rw
    ports:
      - 5672:5672
      - 15672:15672
    restart: always
  opencti:
    container_name: opencti-platform
    image: opencti/platform:5.5.2
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=8080
      - APP__BASE_URL=${OPENCTI_BASE_URL}
      # - APP__REACTIVE=true
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__APP_LOGS__LOGS_LEVEL=debug
      # - APP__LOGS=/var/log/opencti
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://opencti-elasticsearch:9200
      # - ELASTICSEARCH_SSL_CA=/home/ansible/.opencti-ssl/opencti-play.pem
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9001
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
      # - APP__HTTPS_CERT__CA='["${SSL_ROOT_CA}"]'
    volumes:
      - ${SSL_ROOT_CA}:/etc/ssl/certs/opencti.crt:ro
    networks:
      default:
    ports:
      - 8080:8080/tcp
      - 127.0.0.1:50000:8080/tcp
    labels:
      - "ext0.enable=true"
      - "ext0.http.routers.opencti.rule=Host(${DOCKER_IP})"
      - "ext0.http.routers.opencti.entrypoints=https"
      - "ext0.http.services.opencti.loadbalancer.server.port=8080"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: always
    deploy:
      placement:
        constraints:
          - "node.role==manager"
  worker:
    image: opencti/worker:5.5.2
    environment:
      - OPENCTI_URL=http://127.0.0.1:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 3
    restart: always
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=3 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=3 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-txt:
    image: opencti/connector-export-file-txt:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_TXT_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileTxt
      - CONNECTOR_SCOPE=text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=3 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=true
      - CONNECTOR_CONFIDENCE_LEVEL=3 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  #connector-import-file-pdf-observables:
  #  image: opencti/connector-import-file-pdf-observables:5.5.2
  #  environment:
  #    - CONNECTOR_CONFIDENCE_LEVEL=3
  #    - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_PDF_ID}
  #    - CONNECTOR_LOG_LEVEL=info
  #    - CONNECTOR_NAME=ImportFilePdfObservables
  #    - CONNECTOR_SCOPE=application/pdf
  #    - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
  #    - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
  #    - OPENCTI_URL=http://opencti:8080
  #    - PDF_OBSERVABLES_CREATE_INDICATOR=False
  #  restart: always
  #  depends_on:
  #    - opencti
  connector-import-document:
    image: opencti/connector-import-document:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_DOCUMENT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportDocument
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/pdf,text/plain,text/html
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=3 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_DOCUMENT_CREATE_INDICATOR=true
    restart: always
    depends_on:
      - opencti

volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:
  ssldata:

Update docker documentation for running in various environments

Use case

Update the existing OpenCTI/docker documentation to cover dev/prod environments, including capabilities for easier local development of the front-end as well as default overrides.

Current Workaround

Manual manipulation of docker-compose.yml is currently required to run different settings within docker-compose.

Proposed Solution

Utilize docker-compose.override.yml and add documentation on how to run the different docker-compose.*.yml versions based on environment needs.
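For illustration, docker-compose automatically merges a docker-compose.override.yml sitting next to docker-compose.yml, and additional files can be layered explicitly with repeated -f flags. A minimal sketch of an override that remaps the exposed port and raises the log level without touching the base file (values are examples only):

# docker-compose.override.yml
version: '3'
services:
  opencti:
    ports:
      - "8081:8080"
    environment:
      - APP__APP_LOGS__LOGS_LEVEL=debug

An environment-specific file would then be selected with, e.g., docker-compose -f docker-compose.yml -f docker-compose.dev.yml up.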

Additional Information

N/A

If the feature request is approved, would you be willing to submit a PR?

Yes.

Token configuration must be the same as APP__ADMIN__TOKEN

For a simple try I just copied .env.sample to .env file.

Steps to reproduce:

  1. cp .env.sample .env
  2. docker-compose up

So, this error occurs for:

  • connector-history
  • connector-export-file-stix
  • connector-import-file-stix
  • connector-import-file-pdf-observables

docker version: 19.03.14 or 20.10.2
docker-compose version: 1.28.0
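A plausible cause, given the steps above: .env.sample ships placeholder values, and pycti rejects a token that is unset or still equal to the placeholder, so copying the sample verbatim triggers exactly this error for every connector and worker. A sketch of the usual fix (the variable name is taken from .env.sample):

# Generate a real UUIDv4 for the admin token shared by platform, workers and connectors
sed -i "s|^OPENCTI_ADMIN_TOKEN=.*|OPENCTI_ADMIN_TOKEN=$(cat /proc/sys/kernel/random/uuid)|" .env

Each CONNECTOR_*_ID variable needs its own UUIDv4 as well.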

logging:

Starting docker_minio_1                                 ... done
Starting docker_connector-history_1                     ... done
Starting docker_elasticsearch_1                         ... done
Starting docker_connector-import-file-stix_1            ... done
Starting docker_connector-import-file-pdf-observables_1 ... done
Starting docker_redis_1                                 ... done
Starting docker_connector-export-file-stix_1            ... done
Starting docker_rabbitmq_1                              ... done
Starting docker_connector-export-file-csv_1             ... done
Starting docker_opencti_1                               ... done
Starting docker_worker_1                                ... done
Starting docker_worker_2                                ... done
Starting docker_worker_3                                ... done
Attaching to docker_connector-import-file-pdf-observables_1, docker_connector-import-file-stix_1, docker_connector-history_1, docker_minio_1, docker_elasticsearch_1, docker_redis_1, docker_connector-export-file-csv_1, docker_rabbitmq_1, docker_connector-export-file-stix_1, docker_opencti_1, docker_worker_1, docker_worker_2, docker_worker_3
connector-history_1                      | Traceback (most recent call last):
connector-history_1                      |   File "history.py", line 70, in <module>
connector-history_1                      |     HistoryInstance = HistoryConnector()
connector-history_1                      |   File "history.py", line 20, in __init__
connector-history_1                      |     self.helper = OpenCTIConnectorHelper(config)
connector-history_1                      |   File "/usr/local/lib/python3.8/site-packages/pycti/connector/opencti_connector_helper.py", line 308, in __init__
connector-history_1                      |     self.api = OpenCTIApiClient(
connector-history_1                      |   File "/usr/local/lib/python3.8/site-packages/pycti/api/opencti_api_client.py", line 89, in __init__
connector-history_1                      |     raise ValueError(
connector-history_1                      | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
minio_1                                  | 
minio_1                                  |  You are running an older version of MinIO released 1 month ago 
minio_1                                  |  Update: Run `mc admin update` 
minio_1                                  | 
minio_1                                  | 
minio_1                                  | Endpoint:  http://172.18.0.5:9000  http://127.0.0.1:9000
minio_1                                  | 
minio_1                                  | Browser Access:
minio_1                                  |    http://172.18.0.5:9000  http://127.0.0.1:9000
minio_1                                  | 
minio_1                                  | Object API (Amazon S3 compatible):
minio_1                                  |    Go:         https://docs.min.io/docs/golang-client-quickstart-guide
minio_1                                  |    Java:       https://docs.min.io/docs/java-client-quickstart-guide
minio_1                                  |    Python:     https://docs.min.io/docs/python-client-quickstart-guide
minio_1                                  |    JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide
minio_1                                  |    .NET:       https://docs.min.io/docs/dotnet-client-quickstart-guide
opencti_1                                | yarn run v1.19.1
opencti_1                                | $ node build/index.js
redis_1                                  | 1:C 22 Jan 2021 17:56:48.939 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1                                  | 1:C 22 Jan 2021 17:56:48.939 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1                                  | 1:C 22 Jan 2021 17:56:48.939 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1                                  | 1:M 22 Jan 2021 17:56:48.940 * Running mode=standalone, port=6379.
redis_1                                  | 1:M 22 Jan 2021 17:56:48.940 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1                                  | 1:M 22 Jan 2021 17:56:48.940 # Server initialized
redis_1                                  | 1:M 22 Jan 2021 17:56:48.940 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1                                  | 1:M 22 Jan 2021 17:56:48.940 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
redis_1                                  | 1:M 22 Jan 2021 17:56:48.943 * Loading RDB produced by version 6.0.9
redis_1                                  | 1:M 22 Jan 2021 17:56:48.943 * RDB age 68 seconds
redis_1                                  | 1:M 22 Jan 2021 17:56:48.943 * RDB memory usage when created 0.77 Mb
redis_1                                  | 1:M 22 Jan 2021 17:56:48.943 * DB loaded from disk: 0.000 seconds
redis_1                                  | 1:M 22 Jan 2021 17:56:48.943 * Ready to accept connections
worker_1                                 | Traceback (most recent call last):
worker_1                                 |   File "worker.py", line 281, in <module>
worker_1                                 |     worker = Worker()
worker_1                                 |   File "worker.py", line 208, in __init__
worker_1                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_1                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_1                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_1                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
worker_3                                 | Traceback (most recent call last):
worker_3                                 |   File "worker.py", line 281, in <module>
worker_3                                 |     worker = Worker()
worker_3                                 |   File "worker.py", line 208, in __init__
worker_3                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_3                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_3                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_3                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
worker_2                                 | Traceback (most recent call last):
worker_2                                 |   File "worker.py", line 281, in <module>
worker_2                                 |     worker = Worker()
worker_2                                 |   File "worker.py", line 208, in __init__
worker_2                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_2                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_2                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_2                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
connector-import-file-pdf-observables_1  | Token configuration must be the same as APP__ADMIN__TOKEN
connector-import-file-stix_1             | Token configuration must be the same as APP__ADMIN__TOKEN
docker_connector-history_1 exited with code 1
connector-export-file-csv_1              | Token configuration must be the same as APP__ADMIN__TOKEN
connector-export-file-stix_1             | Token configuration must be the same as APP__ADMIN__TOKEN
opencti_1                                | {"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-01-22T17:57:02.229Z","data":{"reason":"ElasticSearch seems down","category":"technical"},"internalData":{}},"level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-01-22T17:57:02.229Z"}
opencti_1                                | error Command failed with exit code 1.
opencti_1                                | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
docker_worker_1 exited with code 1
docker_worker_3 exited with code 1
docker_worker_2 exited with code 1
worker_3                                 | Traceback (most recent call last):
worker_3                                 |   File "worker.py", line 281, in <module>
worker_3                                 |     worker = Worker()
worker_3                                 |   File "worker.py", line 208, in __init__
worker_3                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_3                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_3                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_3                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
worker_1                                 | Traceback (most recent call last):
worker_1                                 |   File "worker.py", line 281, in <module>
worker_1                                 |     worker = Worker()
worker_1                                 |   File "worker.py", line 208, in __init__
worker_1                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_1                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_1                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_1                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN
worker_2                                 | Traceback (most recent call last):
worker_2                                 |   File "worker.py", line 281, in <module>
worker_2                                 |     worker = Worker()
worker_2                                 |   File "worker.py", line 208, in __init__
worker_2                                 |     self.opencti_url, self.opencti_token, self.log_level
worker_2                                 |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 90, in __init__
worker_2                                 |     "Token configuration must be the same as APP__ADMIN__TOKEN"
worker_2                                 | ValueError: Token configuration must be the same as APP__ADMIN__TOKEN

Issue with the .env file

Greetings! I had an issue with the installation of the new version of OpenCTI. I was reading the documentation and some videos and found no info about this variable:

OPENCTI_BASE_URL=http://localhost:8080

I am using a manager and a worker. Does that 'localhost' string need to be changed? Because 'localhost' resolves the GUI to a local IP, the manager can't connect to the GUI, and the worker doesn't have a container that publishes the service.

Simple question: should the 'localhost' string be changed, and which IP do I need to put there, the worker's or the manager's?
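For orientation: OPENCTI_BASE_URL should be the URL that browsers use to reach the platform from outside the stack, so on a multi-node deployment 'localhost' is indeed wrong; workers and connectors inside the stack keep using the internal service name. A sketch, where 203.0.113.10 stands in for the manager node's reachable IP (a placeholder, not a recommendation):

# .env on the node deploying the stack
OPENCTI_BASE_URL=http://203.0.113.10:8080

# Workers/connectors inside the overlay network keep:
#   OPENCTI_URL=http://opencti:8080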

OpenCTI to feed IOCs (etc) to LogRhythm.

Hi,

I would like some guidance on how exactly I can pass feeds from the OpenCTI platform (not sure if relevant, but it runs in Docker) into a SIEM, namely LogRhythm.
Looking through previous issues I got a rough idea (Redis, GraphQL API), but after going through the docs and code I am still unsure how exactly this could be done. Any help is appreciated.

Kubernetes Deployment

Are there any directions for deploying OpenCTI and connectors to a Kubernetes cluster instead of using Docker Swarm?

OpenCTI 5.0.0 docker fails to start ( Cannot find module 'apollo-server-testing')

Description

OpenCTI 5.0.0 upgrade error

Environment

OS (where OpenCTI server runs): { Red Hat Enterprise Linux Server release 7.9 (Maipo) with Docker Compose }
OpenCTI version: { OpenCTI 5.0.0}
OpenCTI client: { Frontend }
Other environment details:
Elasticsearch 7.14.1
RabbitMQ 3.8
Redis 6.2.5
MinIO RELEASE.2021-08-25T00-41-18Z

Reproducible Steps

Steps to create the smallest reproducible scenario:

  1. Update the docker-compose.yml file with the new version numbers of the container images (from 4.5.5 to 5.0.0)
  2. sudo docker-compose up -d
  3. Some containers remain in restarting mode

Expected Output

OpenCTI UP and Running

Actual Output

OpenCTI does not start.

Analyzing the connectors containers log we have the following error:
_ValueError: OpenCTI API is not reachable. Please wait for OpenCTI API to start or check the configuration..._

Analyzing the OpenCTI container log we have the following error:

#docker logs docker_opencti_1
Error: Cannot find module 'apollo-server-testing'
Require stack:
- /opt/opencti/build/index.js
    at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
    at Function.Module._load (node:internal/modules/cjs/loader:778:27)
    at Module.require (node:internal/modules/cjs/loader:1005:19)
    at require (node:internal/modules/cjs/helpers:94:18)
    at Object.moduleId [as 866] (/opt/opencti/build/external "apollo-server-testing":1:38)
    at __webpack_require__ (/opt/opencti/build/webpack/bootstrap:19:22)
    at /opt/opencti/build/webpack/startup:3:1
    at Object.<anonymous> (/opt/opencti/build/index.js:1:2529377)
    at Module._compile (node:internal/modules/cjs/loader:1101:14)
    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)
    at Module.load (node:internal/modules/cjs/loader:981:32)
    at Function.Module._load (node:internal/modules/cjs/loader:822:12)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12)
    at node:internal/main/run_main_module:17:47

Add wait-for-it to reduce errors at the startup

In order to reduce error logs at the startup of docker-compose up, could you add a wrapper script such as wait-for-it, e.g. command: ["./wait-for-it.sh", "opencti:8080", "--", "/entrypoint.sh"]?

Documentation: https://docs.docker.com/compose/startup-order/
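Wired into a connector service, the proposal would look roughly like this (a sketch, assuming wait-for-it.sh is present in the image or mounted into it):

  connector-mitre:
    image: opencti/connector-mitre:latest
    command: ["./wait-for-it.sh", "opencti:8080", "--", "/entrypoint.sh"]
    depends_on:
      - opencti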

Sample of the error just because the API is not up at the startup:

connector-mitre_1             | Traceback (most recent call last):
connector-mitre_1             |   File "mitre.py", line 88, in <module>
connector-mitre_1             |     mitreConnector = Mitre()
connector-mitre_1             |   File "mitre.py", line 15, in __init__
connector-mitre_1             |     self.helper = OpenCTIConnectorHelper(config)
connector-mitre_1             |   File "/usr/local/lib/python3.7/site-packages/pycti/connector/opencti_connector_helper.py", line 150, in __init__
connector-mitre_1             |     self.api = OpenCTIApiClient(self.opencti_url, self.opencti_token, self.log_level)
connector-mitre_1             |   File "/usr/local/lib/python3.7/site-packages/pycti/api/opencti_api_client.py", line 106, in __init__
connector-mitre_1             |     raise ValueError('OpenCTI API seems down')
connector-mitre_1             | ValueError: OpenCTI API seems down

Improve init.py script

The init.py [1] script is a quick-and-dirty way to speed up the setup phase for launching OpenCTI with Docker and to avoid too many manual steps. Capabilities:

  • Password generation
  • UUIDv4 generation for the connectors
  • Saving all of the above into the predefined environment variables in the local .env file

Ideas for further development:

  • Code refactoring
  • Take docker-compose file as command line argument
  • Propose connectors, directly download the relevant connector docker configs and embed in the OpenCTI docker-compose config
  • ...

If anybody would like to invest some time and energy, feel free to submit PRs to improve the script.

Thanks

[1] https://github.com/OpenCTI-Platform/docker/blob/master/init.py
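For orientation, a rough shell equivalent of what the script automates (variable names are taken from .env.sample; adjust to your checkout):

cp .env.sample .env
# Give the platform a fresh admin token and each connector its own UUIDv4
for var in OPENCTI_ADMIN_TOKEN CONNECTOR_EXPORT_FILE_STIX_ID CONNECTOR_IMPORT_FILE_STIX_ID; do
  sed -i "s|^${var}=.*|${var}=$(cat /proc/sys/kernel/random/uuid)|" .env
done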

Question regarding anomalous RabbitMQ data

I'm experiencing an issue in our OpenCTI deployment in which the data volume of the RabbitMQ service is enormous:

109G	./data/amqpdata/mnesia/rabbit@rabbit_node_1/msg_stores/vhosts/<LONG_HEX_NUMBER>/queues/<OTHER_LONG_HEX_NUMBER>
109G	./data/amqpdata/mnesia/rabbit@rabbit_node_1/msg_stores/vhosts/<LONG_HEX_NUMBER>/queues
17G	./data/amqpdata/mnesia/rabbit@rabbit_node_1/msg_stores/vhosts/<LONG_HEX_NUMBER>/msg_store_persistent
125G	./data/amqpdata/mnesia/rabbit@rabbit_node_1/msg_stores/vhosts/<LONG_HEX_NUMBER>

The configuration of the docker-compose.yml says the following for the service:

  rabbitmq:
    image: rabbitmq:3.10-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: always
    hostname: rabbit_node_1 # Line added to avoid having different identifiers for the node that are kept and no longer used.

The addition of the hostname line did not solve the problem, since data in that volume keeps increasing under the data folder for that node.

I guess my issue is probably not related to OpenCTI but to tasks that are being queued and kept pending (or something similar). At the same time I am not experiencing data loss: things seem (I stress, seem) to be ingested normally, and enrichments are performed properly using VT, AbuseIPDB and others.

Do you have any idea how to deal with this space issue? Is there something I can add to limit the space and discard, for example, old tasks? Since I am not an expert on RabbitMQ and my experience managing these services is limited to the official docs and generic documents, my only temporary workaround to keep the internal service alive (don't laugh) has been to periodically free the data manually. As you can imagine, I am not comfortable with that at all, since I am surely forcing the deletion of queued tasks and probably causing streaming issues (which, as I am not using streams, I am not experiencing).

OpenCti unable to start, webui not loading at 8080

Hi, I installed OpenCTI via Docker using the following steps:

sudo apt-get install docker-compose
mkdir -p /home/bakhtawar/Documents && cd /home/bakhtawar/Documents
git clone https://github.com/OpenCTI-Platform/docker.git
cd docker
(parameters set directly in docker-compose.yml)
sudo sysctl -w vm.max_map_count=1048575
docker swarm init --advertise-addr
sudo docker stack deploy --compose-file docker-compose.yml opencti


This is the docker-compose.yml file

version: '3'
services:
  redis:
    image: redis:6.2.6
    restart: always
    volumes:
      - redisdata:/data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.ml.enabled=false
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2022-02-26T02-54-46Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: xxxxx
      MINIO_ROOT_PASSWORD: xxxxx
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.9-management
    environment:
      - RABBITMQ_DEFAULT_USER=xxxx
      - RABBITMQ_DEFAULT_PASS=xxxx
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: always
  opencti:
    image: opencti/platform:5.2.4
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=8080
      - APP__ADMIN__EMAIL=xxxxx
      - APP__ADMIN__PASSWORD=xxxxx
      - APP__ADMIN__TOKEN=5bfb5640-1edf-4d2f-8389-f75ca01ca01c
      - APP__APP_LOGS__LOGS_LEVEL=error
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://elasticsearch:9200
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
      - SUBSCRIPTION_SCHEDULER__ENABLED=false
    ports:
      - "8080:8080"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: always
  worker:
    image: opencti/worker:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 3
    restart: always
  connector-history:
    image: opencti/connector-history:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=3afc67d6-6544-44f4-9d6b-61cac9e6690c
      - CONNECTOR_TYPE=STREAM
      - CONNECTOR_NAME=History
      - CONNECTOR_SCOPE=history
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=45f09e76-2cf7-4d21-a6f9-d4080f93d1ec
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=9c79cebc-5977-4ae2-abbb-cfa37113622d
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-txt:
    image: opencti/connector-export-file-txt:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=7796c7bc-3d73-4125-9a9a-8f36a88a5b92
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileTxt
      - CONNECTOR_SCOPE=text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=e2fae2c0-6620-4045-a4b7-e9b43f0d30d1
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-document:
    image: opencti/connector-import-document:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=43253472-1031-4dfe-9909-856d40ca92a3
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportDocument
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/pdf,text/plain,text/html
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_DOCUMENT_CREATE_INDICATOR=true
    restart: always
    depends_on:
      - opencti

volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:
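Worth noting, as an assumption based on the steps above rather than a confirmed diagnosis: docker stack deploy does not read a .env file, so ${OPENCTI_ADMIN_TOKEN}, ${MINIO_ROOT_USER}, ${RABBITMQ_DEFAULT_USER} and the related references resolve to empty strings unless they are exported in the deploying shell, leaving the workers with a token that cannot match the hard-coded APP__ADMIN__TOKEN. A sketch, assuming a bash shell and a populated .env file:

set -a; . ./.env; set +a    # export every variable defined in .env
sudo -E docker stack deploy --compose-file docker-compose.yml opencti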

Upgrading from 3.3.2 to 4.1.2

Hi!

Looks like the newer version doesn't use Grakn anymore. I've upgraded OpenCTI to 4.1.2 and all my data is lost. I believe it's still in Grakn, but I can't find anywhere how to migrate the data...

SSL (https) needed for external authentication source

Hi,

We have an OpenCTI instance running over HTTP on port 8080.
Now, if I want to use an OpenID provider, it requires that the callback URL use SSL (https).

Do I need to set up an nginx proxy that forwards and upgrades HTTP port 8080 to, for example, HTTPS port 8081? (We cannot use port 443.)

Can I let all my connectors still connect to port 8080?

Minio Invalid Credentials Error

I have a strange error where the successful running of OpenCTI depends on where the directory is cloned.

In the original project folder where I cloned it, I ran the following commands:

git clone https://github.com/OpenCTI-Platform/docker.git
cd docker
cp .env.example .env
docker-compose --compatibility up

And got the following error from minio_1:

ERROR Unable to initialize server switching into safe-mode: Unable to initialize sub-systems: Unable to initialize config system: Invalid credentials

However, when I changed the folder where I clone the repo, it runs fine. My hunch is that some environment configuration is messing up my first clone, but I don't know what or where it is.
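One possible explanation, offered as an assumption rather than a confirmed diagnosis: docker-compose derives its project name from the directory name and namespaces volumes with it, and MinIO persists its credentials inside the s3data volume on first start. A volume left over from an earlier run with different credentials would therefore break the first clone, while the fresh clone gets fresh volumes. A sketch of a clean reset, which deletes all stored data:

cd docker
docker-compose down -v            # stop the stack and remove its named volumes (data loss!)
docker-compose --compatibility up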

Docker tag opencti/platform:5.3.1 is not available in docker hub.

docker-compose up -d
[+] Running 0/7
 ⠿ connector-export-file-csv Error                                                                                                                      2.5s
 ⠿ connector-export-file-txt Error                                                                                                                      2.5s
 ⠿ worker Error                                                                                                                                         2.5s
 ⠿ connector-import-document Error                                                                                                                      2.5s
 ⠿ connector-export-file-stix Error                                                                                                                     2.5s
 ⠿ connector-import-file-stix Error                                                                                                                     2.5s
 ⠿ opencti Error                                                                                                                                        2.5s
Error response from daemon: manifest for opencti/platform:5.3.1 not found: manifest unknown: manifest unknown

Support the use of environment file .env

Compose supports declaring default environment variables in an environment file named .env, placed in the folder where the docker-compose command is executed (the current working directory); see the Compose documentation.

So, instead of editing docker-compose.yml, we could just execute:

echo "APP__ADMIN__TOKEN=$(cat /proc/sys/kernel/random/uuid)" > .env
echo "APP__ADMIN__PASSWORD=MySecret" >> .env
..
docker-compose --compatibility up

No need to edit the original docker-compose.yml, and no more risk of uploading it to GitHub :)

Another option is to use docker-compose.override.yml, see https://docs.docker.com/compose/extends/#multiple-compose-files

ReplyError: ERR no such key

[ERROR]

ReplyError: ERR no such key

Services

Elastic: 7.12.1
rabbitmq: 3.8-management
minio: RELEASE.2021-04-22T15-44-28Z
redis: 6.2.2
opencti/platform: 4.5.0


Platform: Kubernetes

Issue: upon startup of the opencti container, the error below is thrown.

I have verified connectivity to redis, minio, rabbitmq, and elastic

Error:

Mon, May 10 2021 11:59:09 am | {"category":"APP","version":"4.5.0","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2021-05-10T16:59:09.331Z"}
Mon, May 10 2021 11:59:09 am | {"category":"APP","version":"4.5.0","level":"info","message":"[CHECK] ElasticSearch is alive","timestamp":"2021-05-10T16:59:09.420Z"}
Mon, May 10 2021 11:59:09 am | {"category":"APP","version":"4.5.0","level":"info","message":"[CHECK] Minio is alive","timestamp":"2021-05-10T16:59:09.450Z"}
Mon, May 10 2021 11:59:09 am | {"category":"APP","version":"4.5.0","level":"info","message":"[CHECK] RabbitMQ is alive","timestamp":"2021-05-10T16:59:09.517Z"}
Mon, May 10 2021 11:59:09 am | {"category":"APP","version":"4.5.0","level":"info","message":"[CHECK] Redis is alive","timestamp":"2021-05-10T16:59:09.518Z"}
Mon, May 10 2021 11:59:10 am | {"category":"APP","version":"4.5.0","level":"info","message":"[CHECK] Python3 is available","timestamp":"2021-05-10T16:59:10.530Z"}
Mon, May 10 2021 11:59:10 am | {"category":"APP","version":"4.5.0","level":"info","message":"[INIT] Existing platform detected, initialization...","timestamp":"2021-05-10T16:59:10.569Z"}
Mon, May 10 2021 11:59:11 am | {"category":"APP","version":"4.5.0","level":"info","message":"[INIT] admin user initialized","timestamp":"2021-05-10T16:59:11.672Z"}
Mon, May 10 2021 11:59:11 am | {"category":"APP","version":"4.5.0","level":"info","message":"[MIGRATION] Read 0 migrations from the database","timestamp":"2021-05-10T16:59:11.715Z"}
Mon, May 10 2021 11:59:11 am | {"category":"APP","version":"4.5.0","level":"info","message":"[MIGRATION] Platform already up to date, nothing to migrate","timestamp":"2021-05-10T16:59:11.717Z"}
Mon, May 10 2021 11:59:11 am | {"category":"APP","version":"4.5.0","level":"info","message":"[MIGRATION] Migration process completed","timestamp":"2021-05-10T16:59:11.719Z"}
Mon, May 10 2021 11:59:14 am | {"category":"APP","version":"4.5.0","level":"info","message":"[STREAM] Starting streaming processor","timestamp":"2021-05-10T16:59:14.212Z"}
Mon, May 10 2021 11:59:14 am | {"category":"APP","version":"4.5.0","level":"info","message":"[OPENCTI] Servers ready on port 8080","timestamp":"2021-05-10T16:59:14.217Z"}
Mon, May 10 2021 11:59:14 am | node:internal/process/promises:245
Mon, May 10 2021 11:59:14 am | triggerUncaughtException(err, true /* fromPromise */);
Mon, May 10 2021 11:59:14 am | ^
Mon, May 10 2021 11:59:14 am |  
Mon, May 10 2021 11:59:14 am | ReplyError: ERR no such key
Mon, May 10 2021 11:59:14 am | at se (/opt/opencti/build/index.js:1:87463)
Mon, May 10 2021 11:59:14 am | at i (/opt/opencti/build/index.js:1:87927)
Mon, May 10 2021 11:59:14 am | at /opt/opencti/build/index.js:1:88227
Mon, May 10 2021 11:59:14 am | at Object.start (/opt/opencti/build/index.js:1:88270)
Mon, May 10 2021 11:59:14 am | at processTicksAndRejections (node:internal/process/task_queues:94:5)
Mon, May 10 2021 11:59:14 am | at async Ft (/opt/opencti/build/index.js:1:232051) {
Mon, May 10 2021 11:59:14 am | command: { name: 'XINFO', args: [ 'STREAM', 'stream.opencti' ] }
Mon, May 10 2021 11:59:14 am | }


OPENCTI_BASE_URL missing from README.md

Every time a new setting comes up, both .env.sample and README.md need to be updated, so multiple places must be kept in sync, which is not ideal. That's why the jq helper command in README.md does not include the new setting.

Problem with connectors (Error 500 : ConnectorsStatusQuery)

Description :
I installed OpenCTI 5.2.4 with Docker. I encounter a problem when I go to the Data > Connectors tab, which says "An unknown error occurred. Please contact your administrator or the OpenCTI maintainers".
I guess the error is coming from RabbitMQ, but I don't know how to solve it.


Environment :

  • Docker version : 20.10.15
  • OpenCTI version : 5.2.4
  • RabbitMQ 3.9.17
  • Erlang 24.3.4

Docker-compose file :

version: '3'
services:
  redis:
    image: redis:6.2.6
    restart: unless-stopped
    volumes:
      - redisdata:/data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.ml.enabled=false
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2022-02-26T02-54-46Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}    
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: unless-stopped
  rabbitmq:
    image: rabbitmq:3.9-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: unless-stopped
    ports:
      - 5672:5672
      - 15672:15672
  opencti:
    image: opencti/platform:5.2.4
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=5678
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__APP_LOGS__LOGS_LEVEL=info
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://elasticsearch:9200
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    ports:
      - "5678:5678"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: unless-stopped
  connector-history:
    image: opencti/connector-history:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_HISTORY_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=STREAM
      - CONNECTOR_NAME=History
      - CONNECTOR_SCOPE=history
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: unless-stopped
    depends_on:
      - opencti
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: unless-stopped
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: unless-stopped
    depends_on:
      - opencti
  connector-export-file-txt:
    image: opencti/connector-export-file-txt:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_TXT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileTxt
      - CONNECTOR_SCOPE=text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: unless-stopped
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: unless-stopped
    depends_on:
      - opencti
  connector-import-document:
    image: opencti/connector-import-document:5.2.4
    environment:
      - OPENCTI_URL=http://opencti:5678
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_DOCUMENT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportDocument
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/pdf,text/plain,text/html
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_DOCUMENT_CREATE_INDICATOR=true
    restart: unless-stopped
    depends_on:
      - opencti

volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:

OpenCTI Logs :

{"category":"APP","level":"info","message":"[OPENCTI] Starting platform","timestamp":"2022-05-11T11:41:28.642Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[SEARCH ENGINE] Elasticsearch (7.17.1) client selected / runtime sorting enabled","timestamp":"2022-05-11T11:41:28.731Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[CHECK] Search engine is alive","timestamp":"2022-05-11T11:41:28.731Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[CHECK] Minio is alive","timestamp":"2022-05-11T11:41:28.748Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[CHECK] RabbitMQ is alive","timestamp":"2022-05-11T11:41:28.769Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[REDIS] Redis 'Client base' client ready","timestamp":"2022-05-11T11:41:28.778Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[CHECK] Redis is alive","timestamp":"2022-05-11T11:41:28.780Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[CHECK] Python3 is available","timestamp":"2022-05-11T11:41:29.124Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[REDIS] Redis 'Client context' client ready","timestamp":"2022-05-11T11:41:29.127Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[INIT] Starting platform initialization","timestamp":"2022-05-11T11:41:29.129Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[INIT] Existing platform detected, initialization...","timestamp":"2022-05-11T11:41:29.270Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[INIT] admin user initialized","timestamp":"2022-05-11T11:41:29.581Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[MIGRATION] Read 0 migrations from the database","timestamp":"2022-05-11T11:41:29.643Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[MIGRATION] Platform already up to date, nothing to migrate","timestamp":"2022-05-11T11:41:29.643Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[MIGRATION] Migration process completed","timestamp":"2022-05-11T11:41:29.644Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[MIGRATION] Platform version updated to 5.2.4","timestamp":"2022-05-11T11:41:29.686Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[INIT] Platform initialization done","timestamp":"2022-05-11T11:41:29.686Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI] API ready on port 5678","timestamp":"2022-05-11T11:41:30.397Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI-MODULE] Running Expiration manager","timestamp":"2022-05-11T11:41:30.397Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI-MODULE] Running retention manager","timestamp":"2022-05-11T11:41:30.397Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI-MODULE] Running task manager","timestamp":"2022-05-11T11:41:30.398Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI-MODULE] Subscription manager not started (disabled by configuration)","timestamp":"2022-05-11T11:41:30.418Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.2","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:30.688Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.8","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:32.148Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.11","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:32.491Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.12","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:33.465Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.12","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:33.703Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[STREAM] Starting stream processor for [email protected]","timestamp":"2022-05-11T11:41:33.707Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[REDIS] Redis '[email protected]' client ready","timestamp":"2022-05-11T11:41:33.709Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.6","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:34.574Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[OPENCTI-MODULE] Running rule manager","timestamp":"2022-05-11T11:41:40.419Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[STREAM] Starting stream processor for Rule manager","timestamp":"2022-05-11T11:41:40.430Z","version":"5.2.4"}
{"category":"APP","level":"info","message":"[REDIS] Redis 'Rule manager' client ready","timestamp":"2022-05-11T11:41:40.432Z","version":"5.2.4"}
{"auth":{"email":"[email protected]","ip":"172.18.0.7","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"category":"AUDIT","level":"info","message":"LOGIN","resource":{"provider":"Bearer"},"timestamp":"2022-05-11T11:41:40.632Z","version":"5.2.4"}
{"category":"APP","error":{"stacktrace":["Error: Request failed with status code 500","at createError (/opt/opencti/build/node_modules/axios/lib/core/createError.js:16:15)","at settle (/opt/opencti/build/node_modules/axios/lib/core/settle.js:17:12)","at IncomingMessage.handleStreamEnd (/opt/opencti/build/node_modules/axios/lib/adapters/http.js:312:11)","at IncomingMessage.emit (node:events:406:35)","at endReadableNT (node:internal/streams/readable:1331:12)","at processTicksAndRejections (node:internal/process/task_queues:83:21)"]},"inner_relation_creation":0,"level":"error","message":"API Call","operation":"ConnectorsStatusQuery","operation_query":"query ConnectorsStatusQuery{...ConnectorsStatus_data}fragment ConnectorsStatus_data on Query{connectors{id name active auto connector_type connector_scope updated_at config{listen listen_exchange push push_exchange}}rabbitMQMetrics{queues{name messages messages_ready messages_unacknowledged consumers idle_since message_stats{ack ack_details{rate}}}}}","size":2,"time":15,"timestamp":"2022-05-11T11:43:11.660Z","type":"READ_ERROR","user":{"ip":"172.18.0.1","referer":"http://localhost:5678/dashboard/data/connectors","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"variables":{},"version":"5.2.4"}
{"category":"APP","error":{"stacktrace":["Error: Request failed with status code 500","at createError (/opt/opencti/build/node_modules/axios/lib/core/createError.js:16:15)","at settle (/opt/opencti/build/node_modules/axios/lib/core/settle.js:17:12)","at IncomingMessage.handleStreamEnd (/opt/opencti/build/node_modules/axios/lib/adapters/http.js:312:11)","at IncomingMessage.emit (node:events:406:35)","at endReadableNT (node:internal/streams/readable:1331:12)","at processTicksAndRejections (node:internal/process/task_queues:83:21)"]},"inner_relation_creation":0,"level":"error","message":"API Call","operation":"WorkersStatusQuery","operation_query":"query WorkersStatusQuery{...WorkersStatus_data}fragment WorkersStatus_data on Query{elasticSearchMetrics{docs{count}search{query_total fetch_total}indexing{index_total delete_total}get{total}}rabbitMQMetrics{consumers overview{queue_totals{messages messages_ready messages_unacknowledged}message_stats{ack ack_details{rate}}}}}","size":2,"time":25,"timestamp":"2022-05-11T11:43:11.665Z","type":"READ_ERROR","user":{"ip":"172.18.0.1","referer":"http://localhost:5678/dashboard/data/connectors","user_id":"88ec0c6a-13ce-5e39-b486-354fe4a7084f"},"variables":{},"version":"5.2.4"}

Status of RabbitMQ: [screenshot]
So the connector containers seem to be properly connected to RabbitMQ; only the Data > Connectors query from OpenCTI generates an error.

Any help would be much appreciated.
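The two failing operations (ConnectorsStatusQuery and WorkersStatusQuery) are exactly the ones that read RabbitMQ's management HTTP API, so the 500 most likely comes from the platform's management settings rather than from the AMQP connection the connectors use. A sketch of the settings involved, assuming the stock rabbitmq management image (the same variable names appear in the compose files quoted later in this document):

  opencti:
    environment:
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672   # the Data > Connectors page queries this HTTP API
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}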

Export of reports in CSV format has the wrong extension

Description

Export of reports, in CSV format, from the GUI has the wrong extension. The file has the extension ".false" while it should be ".csv"

Environment

  • Users' OS: Mac OS 10
  • OpenCTIv4

Reproducible Steps

Steps to create the smallest reproducible scenario:

  • Go to the Analysis -> Reports section.
  • Narrow down the number of reports by filtering. For example, choose only MISP events.
  • Click on EXPORT and choose application/csv.
  • When the export is ready, download the report from the right column of the page.
  • The extension of the report is ".false"

Expected Output

The report should have the extension ".csv"

Actual Output

The report has the extension ".false"

system hangs

Whenever I run Docker, it exits within about 8 seconds.
Please help me resolve this.

OpenCTI on docker keeps failing to start

OS: Ubuntu 20.04
OpenCTI: 4.5.4

This is a new setup in a hypervisor/pfSense environment.
Note: I performed the same Docker install procedure elsewhere; it works on Proxmox and Citrix but fails on the hypervisor/pfSense setup.

[screenshots]

Inquiry: Upgrading OpenCTI and cleaning-up old version

Hi,

This is just an inquiry.
Please advise on the best practice for upgrading the OpenCTI Docker deployment. Is it safe to remove all Docker images and containers when upgrading to a new version of OpenCTI while keeping all existing data feeds?
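A minimal upgrade sketch, assuming all persistent state lives in the named volumes as in the stock compose file (so containers and images are disposable while the volumes are kept):

docker-compose down           # removes containers, keeps named volumes
# edit docker-compose.yml and bump every image tag to the new release
docker-compose pull           # fetch the new images
docker-compose up -d          # recreate containers against the existing volumes
docker image prune -f         # optionally delete the old, now-dangling images

Avoid docker-compose down -v and docker volume prune during an upgrade: the volumes are what hold the Elasticsearch, MinIO, Redis and RabbitMQ data.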

Running two OpenCTI Instances not Working

Description

I cannot run two OpenCTI instances in parallel on the same VM. ElasticSearch is constantly restarting.

Environment

  1. OS (where OpenCTI server runs): Ubuntu 18.04.5 LTS
  2. OpenCTI version: 4.5.4
  3. Other environmental details: nginx

Setup

  1. Create two OpenCTI instances in two separate folders that are identical except for port configuration (80:80 vs 8080:80 in docker-compose config of nginx)
  2. Verify that both can run separately and are accessible on their respective port
  3. Let both run in parallel

The Docker containers do not have the same names across the two instances. Docker-compose creates two separate docker networks, one per instance. Volumes mapped to the host are mapped to two different locations. Per my understanding, it should be possible to run two OpenCTI instances in parallel with this setup.

Expected Output

Two OpenCTI instances running in parallel, one on port 80 and one on port 8080.

Actual Output

It is not working.

In the OpenCTI instance that is started second:
All connector containers and the OpenCTI worker containers restart constantly because they are waiting for the OpenCTI API to be reachable; it is not reachable because the OpenCTI platform container is itself constantly restarting, as it in turn is waiting for ElasticSearch to come up.

In the OpenCTI instance that is started first:
All containers continue running, except ElasticSearch.

In both instances, the ElasticSearch container is restarting every 30 seconds. Hence I assume that the error has to do with the ElasticSearch containers.

Additional information

I am not a docker-compose expert, so it is entirely possible that I missed something.

Log Elastic Search first instance, after first restart:

{"type": "server", "timestamp": "2021-06-15T12:39:51,614Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "77134d5676d3", "message": "version[7.12.1], pid[8], build[default/docker/3186837139b9c6b6d23c3200870651f10d3343b7/2021-04-20T20:56:39.040728659Z], OS[Linux/5.3.0-62-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/16/16+36]" }
{"type": "server", "timestamp": "2021-06-15T12:39:51,616Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "77134d5676d3", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
{"type": "server", "timestamp": "2021-06-15T12:39:51,617Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "77134d5676d3", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-358196448070951726, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms31744m, -Xmx31744m, -XX:MaxDirectMemorySize=16642998272, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=25, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }

Log Elastic Search second instance (from a separate try, that is why the time stamps are not similar to the ones above):

{"type": "server", "timestamp": "2021-06-15T12:35:55,967Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ce2909395a10", "message": "version[7.12.1], pid[8], build[default/docker/3186837139b9c6b6d23c3200870651f10d3343b7/2021-04-20T20:56:39.040728659Z], OS[Linux/5.3.0-62-generic/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/16/16+36]" }
{"type": "server", "timestamp": "2021-06-15T12:35:55,970Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ce2909395a10", "message": "JVM home [/usr/share/elasticsearch/jdk], using bundled JDK [true]" }
{"type": "server", "timestamp": "2021-06-15T12:35:55,970Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "ce2909395a10", "message": "JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -XX:+UseG1GC, -Djava.io.tmpdir=/tmp/elasticsearch-704385226463528375, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Des.cgroups.hierarchy.override=/, -Xms31744m, -Xmx31744m, -XX:MaxDirectMemorySize=16642998272, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=25, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=docker, -Des.bundled_jdk=true]" }

Configuration Port 80

core
version: "3"
services:
  opencti-redis:
    image: redis:6.2.3
    container_name: openCTI-redis
    volumes:
      - /data/master/openCTI/redis:/data
    restart: always

  opencti-es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: openCTI-es
    volumes:
      - /data/master/openCTI/es:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.ml.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    restart: always

  opencti-minio:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    container_name: openCTI-minio
    volumes:
      - /data/master/openCTI/minio:/data
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    restart: always

  opencti-rabbitmq:
    image: rabbitmq:3.8-management
    container_name: openCTI-rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - /data/master/openCTI/rabbitmq:/var/lib/rabbitmq
    restart: always

  opencti:
    image: opencti/platform:4.5.4
    container_name: openCTI
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=${OPENCTI_PORT}
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__LOGS_LEVEL=error
      - APP__LOGS=./logs
      - APP__REACTIVE=true
      - APP__COOKIE_SECURE=false

      - REDIS__HOSTNAME=opencti-redis
      - REDIS__PORT=6379

      - ELASTICSEARCH__URL=http://opencti-es:9200

      - MINIO__ENDPOINT=opencti-minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ACCESS_KEY}
      - MINIO__SECRET_KEY=${MINIO_SECRET_KEY}

      - RABBITMQ__HOSTNAME=opencti-rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}

      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    depends_on:
      - opencti-redis
      - opencti-es
      - opencti-minio
      - opencti-rabbitmq
    restart: always

  opencti-worker:
    image: opencti/worker:4.5.4
    container_name: openCTI-worker
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 10
    restart: always
connectors
version: "3"
services:
# import/export
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:4.5.4
    container_name: connector-export-file-stix
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=100
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-export-file-csv:
    image: opencti/connector-export-file-csv:4.5.4
    container_name: connector-export-file-csv
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=100
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-import-file-stix:
    image: opencti/connector-import-file-stix:4.5.4
    container_name: connector-image-file-stix
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=false
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-import-report:
    image: opencti/connector-import-report:4.5.4
    container_name: connector-import-report
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_REPORT_ID}
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportReport
      - CONNECTOR_AUTO=false # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=true # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_SCOPE=application/pdf,text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_REPORT_CREATE_INDICATOR=false
    restart: always

  connector-history:
      image: opencti/connector-history:4.5.4
      container_name: connector-history
      environment:
        - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
        - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
        - CONNECTOR_ID=${CONNECTOR_HISTORY_ID}
        - CONNECTOR_TYPE=STREAM
        - CONNECTOR_NAME=History
        - CONNECTOR_SCOPE=history
        - CONNECTOR_CONFIDENCE_LEVEL=15
        - CONNECTOR_LOG_LEVEL=info
      restart: always

# basic definitions/data
  connector-opencti:
        image: opencti/connector-opencti:4.5.4
        container_name: connector-openCTI
        environment:
          - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
          - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
          - CONNECTOR_ID=${CONNECTOR_OPENCTI_ID}
          - CONNECTOR_TYPE=EXTERNAL_IMPORT
          - CONNECTOR_NAME=OpenCTI
          - CONNECTOR_SCOPE=marking-definition,identity,location
          - CONNECTOR_CONFIDENCE_LEVEL=100
          - CONNECTOR_UPDATE_EXISTING_DATA=true
          - CONNECTOR_LOG_LEVEL=info
          - CONFIG_SECTORS_FILE_URL=https://raw.githubusercontent.com/OpenCTI-Platform/datasets/master/data/sectors.json
          - CONFIG_GEOGRAPHY_FILE_URL=https://raw.githubusercontent.com/OpenCTI-Platform/datasets/master/data/geography.json
          - CONFIG_INTERVAL=7
        restart: always
    
  connector-mitre:
    image: opencti/connector-mitre:4.5.4
    container_name: connector-mitre
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_MITRE_ID}
      - CONNECTOR_TYPE=EXTERNAL_IMPORT
      - CONNECTOR_NAME=MITRE ATT&CK
      - CONNECTOR_SCOPE=marking-definition,identity,attack-pattern,course-of-action,intrusion-set,campaign,malware,tool,report,external-reference-as-report
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_UPDATE_EXISTING_DATA=true
      - CONNECTOR_LOG_LEVEL=info
      - MITRE_ENTERPRISE_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json
      - MITRE_PRE_ATTACK_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/pre-attack/pre-attack.json
      - MITRE_MOBILE_ATTACK_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/mobile-attack/mobile-attack.json
      - MITRE_INTERVAL=7
    restart: always

  connector-cve:
      image: opencti/connector-cve:4.5.4
      container_name: connector-cve
      environment:
        - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
        - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
        - CONNECTOR_ID=${CONNECTOR_CVE_ID}
        - CONNECTOR_TYPE=EXTERNAL_IMPORT
        - CONNECTOR_NAME=Common Vulnerabilities and Exposures
        - CONNECTOR_SCOPE=identity,vulnerability
        - CONNECTOR_CONFIDENCE_LEVEL=75
        - CONNECTOR_UPDATE_EXISTING_DATA=true
        - CONNECTOR_LOG_LEVEL=info
        - CVE_IMPORT_HISTORY=true
        - CVE_NVD_DATA_FEED=https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-recent.json.gz
        - CVE_HISTORY_DATA_FEED=https://nvd.nist.gov/feeds/json/cve/1.1/
        - CVE_INTERVAL=2 # in days, must be strictly greater than 1
      restart: always
nginx
version: "3"
services:
  web:
    image: nginx
    container_name: nginx
    volumes:
      - ./nginx:/etc/nginx/conf.d
    ports:
      - 80:80
    depends_on:
      - opencti
    restart: always

Configuration Port 8080

This config is basically the same as for the port 80 instance.

core
version: "3"
services:
  opencti-redis:
    image: redis:6.2.3
    container_name: openCTI-redis-dev
    volumes:
      - /data/master/openCTI-dev/redis:/data
    restart: always

  opencti-es:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: openCTI-es-dev
    volumes:
      - /data/master/openCTI-dev/es:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.ml.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    restart: always

  opencti-minio:
    image: minio/minio:RELEASE.2021-04-22T15-44-28Z
    container_name: openCTI-minio-dev
    volumes:
      - /data/master/openCTI-dev/minio:/data
    environment:
      MINIO_ACCESS_KEY: ${MINIO_ACCESS_KEY}
      MINIO_SECRET_KEY: ${MINIO_SECRET_KEY}
    command: server /data
    restart: always

  opencti-rabbitmq:
    image: rabbitmq:3.8-management
    container_name: openCTI-rabbitmq-dev
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - /data/master/openCTI-dev/rabbitmq:/var/lib/rabbitmq
    restart: always

  opencti:
    image: opencti/platform:4.5.4
    container_name: openCTI-dev
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=${OPENCTI_PORT}
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__LOGS_LEVEL=error
      - APP__LOGS=./logs
      - APP__REACTIVE=true
      - APP__COOKIE_SECURE=false

      - REDIS__HOSTNAME=opencti-redis
      - REDIS__PORT=6379

      - ELASTICSEARCH__URL=http://opencti-es:9200

      - MINIO__ENDPOINT=opencti-minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ACCESS_KEY}
      - MINIO__SECRET_KEY=${MINIO_SECRET_KEY}

      - RABBITMQ__HOSTNAME=opencti-rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}

      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    depends_on:
      - opencti-redis
      - opencti-es
      - opencti-minio
      - opencti-rabbitmq
    restart: always

  opencti-worker:
    image: opencti/worker:4.5.4
    container_name: openCTI-worker-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 10
    restart: always
connectors
version: "3"
services:
# import/export
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:4.5.4
    container_name: connector-export-file-stix-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=100
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-export-file-csv:
    image: opencti/connector-export-file-csv:4.5.4
    container_name: connector-export-file-csv-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID}
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=100
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-import-file-stix:
    image: opencti/connector-import-file-stix:4.5.4
    container_name: connector-image-file-stix-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID}
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=false
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_LOG_LEVEL=info
    restart: always

  connector-import-report:
    image: opencti/connector-import-report:4.5.4
    container_name: connector-import-report-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_REPORT_ID}
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportReport
      - CONNECTOR_AUTO=false # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=true # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_SCOPE=application/pdf,text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_REPORT_CREATE_INDICATOR=false
    restart: always

  connector-history:
      image: opencti/connector-history:4.5.4
      container_name: connector-history-dev
      environment:
        - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
        - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
        - CONNECTOR_ID=${CONNECTOR_HISTORY_ID}
        - CONNECTOR_TYPE=STREAM
        - CONNECTOR_NAME=History
        - CONNECTOR_SCOPE=history
        - CONNECTOR_CONFIDENCE_LEVEL=15
        - CONNECTOR_LOG_LEVEL=info
      restart: always

# basic definitions/data
  connector-opencti:
        image: opencti/connector-opencti:4.5.4
        container_name: connector-openCTI-dev
        environment:
          - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
          - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
          - CONNECTOR_ID=${CONNECTOR_OPENCTI_ID}
          - CONNECTOR_TYPE=EXTERNAL_IMPORT
          - CONNECTOR_NAME=OpenCTI
          - CONNECTOR_SCOPE=marking-definition,identity,location
          - CONNECTOR_CONFIDENCE_LEVEL=100
          - CONNECTOR_UPDATE_EXISTING_DATA=true
          - CONNECTOR_LOG_LEVEL=info
          - CONFIG_SECTORS_FILE_URL=https://raw.githubusercontent.com/OpenCTI-Platform/datasets/master/data/sectors.json
          - CONFIG_GEOGRAPHY_FILE_URL=https://raw.githubusercontent.com/OpenCTI-Platform/datasets/master/data/geography.json
          - CONFIG_INTERVAL=7
        restart: always
    
  connector-mitre:
    image: opencti/connector-mitre:4.5.4
    container_name: connector-mitre-dev
    environment:
      - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_MITRE_ID}
      - CONNECTOR_TYPE=EXTERNAL_IMPORT
      - CONNECTOR_NAME=MITRE ATT&CK
      - CONNECTOR_SCOPE=marking-definition,identity,attack-pattern,course-of-action,intrusion-set,campaign,malware,tool,report,external-reference-as-report
      - CONNECTOR_CONFIDENCE_LEVEL=15
      - CONNECTOR_UPDATE_EXISTING_DATA=true
      - CONNECTOR_LOG_LEVEL=info
      - MITRE_ENTERPRISE_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json
      - MITRE_PRE_ATTACK_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/pre-attack/pre-attack.json
      - MITRE_MOBILE_ATTACK_FILE_URL=https://raw.githubusercontent.com/mitre/cti/master/mobile-attack/mobile-attack.json
      - MITRE_INTERVAL=7
    restart: always

  connector-cve:
      image: opencti/connector-cve:4.5.4
      container_name: connector-cve-dev
      environment:
        - OPENCTI_URL=http://opencti:${OPENCTI_PORT}
        - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
        - CONNECTOR_ID=${CONNECTOR_CVE_ID}
        - CONNECTOR_TYPE=EXTERNAL_IMPORT
        - CONNECTOR_NAME=Common Vulnerabilities and Exposures
        - CONNECTOR_SCOPE=identity,vulnerability
        - CONNECTOR_CONFIDENCE_LEVEL=75 # From 0 (Unknown) to 100 (Fully trusted)
        - CONNECTOR_UPDATE_EXISTING_DATA=true
        - CONNECTOR_LOG_LEVEL=info
        - CVE_IMPORT_HISTORY=true # Import history at the first run (after only recent), reset the connector state if you want to re-import
        - CVE_NVD_DATA_FEED=https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-recent.json.gz
        - CVE_HISTORY_DATA_FEED=https://nvd.nist.gov/feeds/json/cve/1.1/
        - CVE_INTERVAL=2 # In days, must be strictly greater than 1
      restart: always
nginx
version: "3"
services:
  web:
    image: nginx
    container_name: nginx-dev
    volumes:
      - ./nginx:/etc/nginx/conf.d
    ports:
      - 8080:80
    depends_on:
      - opencti
    restart: always

Proxy error occurred after docker compose up

Hi,
I have difficulties starting the OpenCTI platform locally on my computer.
I'm using the Docker approach. All the containers seem to be up, but when I start the frontend project with yarn start, I get an error that says: "Error occurred while trying to proxy request /graphql from localhost:3000 to http://localhost:4000 (ECONNREFUSED)"

Am I missing a step?

In the frontend project, after running "yarn start":
[Screenshot from 2021-03-14 12-03-46]

docker ps output:
[Screenshot from 2021-03-14 12-03-30]

docker-compose up keeps restarting forever

I did the steps as in the documentation:

  1. clone the repo
  2. set vm.max_map_count=1048575, write it in /etc/sysctl.conf
  3. copy .env.sample to .env and set the variables, using the uuidgen command to generate the UUIDs (without quotes; I don't know if that is correct)
  4. docker-compose up

Then all the containers based on "opencti/" images keep restarting; the others start normally.

The error messages say "OpenCTI API is not reachable".

I did not modify the docker-compose.yml only the .env file.

I'm using Ubuntu 20.04 server (16 CPU cores and 16 GB of RAM, virtualized in vSphere).
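"OpenCTI API is not reachable" from the workers and connectors only means the platform container never finished booting; the real cause is usually in the platform's or Elasticsearch's own logs. A generic first step (a sketch; the service names match the stock docker-compose.yml):

docker-compose logs --tail=50 opencti         # why the platform itself fails to start
docker-compose logs --tail=50 elasticsearch   # the dependency the platform most often waits on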

Improve the tutorial

Guys, please improve the tutorial.
There is a lot of missing information!
After installing Elasticsearch step by step, then installing docker-compose, downloading the OpenCTI Docker files, and writing the .env file, I got this issue:


$ docker logs docker_opencti_1

{"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-08-20T00:40:28.447Z","data":{"reason":"ElasticSearch seems down","http_status":500,"category":"technical","error":"connect ECONNREFUSED 172.26.0.3:9200"},"internalData":{}},"category":"APP","version":"4.5.5","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-08-20T00:40:28.449Z"}
{"error":{"name":"ConfigurationError","_error":{},"_showLocations":false,"_showPath":false,"time_thrown":"2021-08-20T00:40:53.385Z","data":{"reason":"ElasticSearch seems down","http_status":500,"category":"technical","error":"connect ECONNREFUSED 172.26.0.3:9200"},"internalData":{}},"category":"APP","version":"4.5.5","level":"error","message":"[OPENCTI] Platform initialization fail","timestamp":"2021-08-20T00:40:53.386Z"}

Even with Elasticsearch up, I get this error and am not able to connect to http://<opencti_IP>:8080.
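"ElasticSearch seems down" with ECONNREFUSED on port 9200 means the elasticsearch container is not staying up, so its own logs are the place to look; on a default Linux host the usual culprit is the vm.max_map_count kernel setting mentioned in the installation steps. A sketch (the container name comes from docker ps and may differ on your host):

sudo sysctl -w vm.max_map_count=1048575                          # apply immediately
echo "vm.max_map_count=1048575" | sudo tee -a /etc/sysctl.conf   # persist across reboots
docker logs --tail 50 docker_elasticsearch_1                     # then read why ES was exiting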

Redis taking memory issues

Hello,
This is a default OpenCTI 5.5.4 Docker install. It looks like Redis is consuming most of the memory and then crashing. Is there a way to limit Redis?
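Redis itself supports a hard memory cap; a sketch of wiring one into the stock compose file (the 2gb value and the noeviction policy are example choices, not project recommendations):

  redis:
    image: redis:7.0.0
    restart: always
    command: redis-server --maxmemory 2gb --maxmemory-policy noeviction   # refuse writes instead of evicting stream data
    volumes:
      - redisdata:/data

If the growth comes from the OpenCTI event stream itself, the platform's redis trimming setting (which bounds the stream length) is usually the better lever; check the current OpenCTI documentation for the exact key.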

Opencti api is not reachable.

Hi,
I tried to install the platform using Docker, but I receive this error: "OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration".
The platform is accessible on port 8080, but I cannot use the connectors.
Can someone help me, please?

Opencti api is not reachable.

My problem is this:
I'm trying to send data to my OpenCTI server from an external server. The data is in STIX format. In my script I try to connect to my OpenCTI server, and I get this problem:

from pycti import OpenCTIApiClient


# -----MAIN------
if __name__ == "__main__":
  # Variables
  api_url = "https://xxx.xxxx.xxx.xxx:80/" #IP
  api_token = "72327164-0b35-482b-b5d6-a5a3f76b845f" #connector_import_file_stix_id token /opencti-docker/.env

  # OpenCTI initialization
  opencti_api_client = OpenCTIApiClient(api_url, api_token)

Error:

INFO:root:Listing Threat-Actors with filters null.
Traceback (most recent call last):
  File "Main.py", line 14, in <module>
    opencti_api_client = OpenCTIApiClient(api_url, api_token)
  File "/usr/local/lib/python3.8/dist-packages/pycti/api/opencti_api_client.py", line 187, in __init__
    raise ValueError(
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
Killed
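Two things worth double-checking in the snippet above (observations from the compose file quoted below, not confirmed fixes): the URL uses https:// against port 80, while this deployment publishes the platform on plain HTTP (ports "80:80", no TLS), and the token is a connector ID, whereas OpenCTIApiClient authenticates with a user API token such as OPENCTI_ADMIN_TOKEN. A sketch:

from pycti import OpenCTIApiClient

api_url = "http://xxx.xxx.xxx.xxx:80"      # plain HTTP: the compose file maps 80:80 without TLS
api_token = "<OPENCTI_ADMIN_TOKEN value>"  # a user API token, not a connector UUID

opencti_api_client = OpenCTIApiClient(api_url, api_token)

On the restarting taxii2 container: its TAXII2_DISCOVERY_URL points at http://opencti:8080/, while the platform in this compose file listens on port 80, which is another mismatch worth checking.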

Also, when I run sudo docker ps, I see that the taxii container is always restarting. Is that normal? How can I fix it?

macia.salva@macia:/opencti-docker$ sudo docker ps
CONTAINER ID   IMAGE                                                  COMMAND                  CREATED         STATUS                           PORTS                                                                  NAMES
fda6c2cb6569   opencti/worker:5.3.7                                   "python3 worker.py"      5 minutes ago   Up 2 minutes                                                                                            opencti-docker_worker_1
50093c606ec1   opencti/connector-import-file-stix:5.3.7               "/entrypoint.sh"         5 minutes ago   Up 3 minutes                                                                                            opencti-docker_connector-import-file-stix_1
3b37883968b4   opencti/connector-taxii2:5.3.10                        "/entrypoint.sh"         5 minutes ago   Restarting (137) 4 seconds ago                                                                          opencti-docker_connector-taxii2_1
1384c6b093b4   opencti/connector-export-file-csv:5.3.7                "/entrypoint.sh"         5 minutes ago   Up 3 minutes                                                                                            opencti-docker_connector-export-file-csv_1
dd925fd8985f   opencti/connector-import-document:5.3.7                "/entrypoint.sh"         5 minutes ago   Up 3 minutes                                                                                            opencti-docker_connector-import-document_1
0e500a0a2ada   opencti/connector-export-file-txt:5.3.7                "/entrypoint.sh"         5 minutes ago   Up 3 minutes                                                                                            opencti-docker_connector-export-file-txt_1
5e47c400283b   opencti/connector-export-file-stix:5.3.7               "/entrypoint.sh"         5 minutes ago   Up 3 minutes                                                                                            opencti-docker_connector-export-file-stix_1
819b356e635a   opencti/platform:5.3.7                                 "/sbin/tini -- node …"   5 minutes ago   Up 3 minutes                     0.0.0.0:80->80/tcp                                                     opencti-docker_opencti_1
a50a31c72817   rabbitmq:3.10-management                               "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes                     4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 15691-15692/tcp, 25672/tcp   opencti-docker_rabbitmq_1
3753db773f4c   docker.elastic.co/elasticsearch/elasticsearch:7.17.4   "/bin/tini -- /usr/l…"   5 minutes ago   Up 5 minutes                     9200/tcp, 9300/tcp                                                     opencti-docker_elasticsearch_1
18051af5bffe   redis:7.0.0                                            "docker-entrypoint.s…"   5 minutes ago   Up 5 minutes                     6379/tcp                                                               opencti-docker_redis_1
b6c4d9f092a3   minio/minio:RELEASE.2022-05-19T18-20-59Z               "/usr/bin/docker-ent…"   5 minutes ago   Up 5 minutes (healthy)           0.0.0.0:9000->9000/tcp                                                 opencti-docker_minio_1

Also, when I try to view logs of that container, I receive the same error:

macia.salva@macia:/opencti-docker$ sudo docker-compose logs 3b37883968b4
WARNING: Some services (worker) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
ERROR: No such service: 3b37883968b4
macia.salva@macia:/opencti-docker$ sudo docker logs 3b37883968b4
INFO:root:Listing Threat-Actors with filters null.
Traceback (most recent call last):
  File "/opt/opencti-taxii2/taxii2.py", line 318, in <module>
    raise e
  File "/opt/opencti-taxii2/taxii2.py", line 315, in <module>
    taxii2Connector = Taxii2Connector()
  File "/opt/opencti-taxii2/taxii2.py", line 31, in __init__
    self.helper = OpenCTIConnectorHelper(config)
  File "/usr/local/lib/python3.10/site-packages/pycti/connector/opencti_connector_helper.py", line 605, in __init__
    self.api = OpenCTIApiClient(
  File "/usr/local/lib/python3.10/site-packages/pycti/api/opencti_api_client.py", line 187, in __init__
    raise ValueError(
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
Killed

I have seen this on issue 49:

  restart: always
  networks:
    - opencti-default

networks:
  opencti-default:
    external:
      name: opencti-default

Do you put this code in docker-compose.yml, or do you add this configuration in another .yml file?

When I do a sudo docker network ls I see this:

NETWORK ID     NAME                     DRIVER    SCOPE
96e44fc20fbc   bridge                   bridge    local
7d143d7f3fb4   docker_gwbridge          bridge    local
179fc1a9c349   host                     host      local
khb0xa5hueuq   ingress                  overlay   swarm
9867ff3e693a   none                     null      local
b92c20916768   opencti-docker_default   bridge    local
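To answer the question with a sketch: yes, that fragment goes into the compose file that runs the external connector, using the real network name from docker network ls (here opencti-docker_default, going by the listing above):

services:
  connector-taxii2:
    # ... the existing connector definition ...
    restart: always
    networks:
      - opencti-docker_default

networks:
  opencti-docker_default:
    external: true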

My docker-compose.yml:

version: '3'
services:
  redis:
    image: redis:7.0.0
    restart: always
    volumes:
      - redisdata:/data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.4
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      # Comment out the line below for single-node
      - discovery.type=single-node
      - xpack.security.enabled=false
      # Uncomment the lines below for a cluster of multiple nodes
      #- cluster.name=docker-cluster
      #- xpack.ml.enabled=false
      #- "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2022-05-19T18-20-59Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.10-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: always
  opencti:
    image: opencti/platform:5.3.7
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=80
      - APP__BASE_URL=${OPENCTI_BASE_URL}
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__APP_LOGS__LOGS_LEVEL=error
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://elasticsearch:9200
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    ports:
      - "80:80"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: always
  worker:
    image: opencti/worker:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 3
    restart: always
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-txt:
    image: opencti/connector-export-file-txt:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_TXT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileTxt
      - CONNECTOR_SCOPE=text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-document:
    image: opencti/connector-import-document:5.3.7
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_DOCUMENT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportDocument
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/pdf,text/plain,text/html
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
      - IMPORT_DOCUMENT_CREATE_INDICATOR=true
    restart: always
    depends_on:
      - opencti
  connector-taxii2:
    image: opencti/connector-taxii2:5.3.10
    environment:
      - OPENCTI_URL=http://opencti:80
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=e32fbdbe-5a84-4da3-956b-b72522b6c2bf
      - CONNECTOR_TYPE=EXTERNAL_IMPORT
      - CONNECTOR_NAME=TAXII2
      - CONNECTOR_SCOPE=ipv4-addr,ipv6-addr,vulnerability,domain,url,file-sha256,file-md5,file-sha1
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_UPDATE_EXISTING_DATA=true
      - CONNECTOR_LOG_LEVEL=debug
      - TAXII2_DISCOVERY_URL=http://opencti:8080/taxii2/api-bases/ # Required
        #- TAXII2_CERT_PATH=ChangeMe # Optional (.pem)
      - TAXII2_USERNAME=prueba # Required
      - TAXII2_PASSWORD=prueba
      - TAXII2_V21=true # Is TAXII v2.1
      - TAXII2_COLLECTIONS=*.* # Required
      - TAXII2_INITIAL_HISTORY=24 # Required, in hours
      - TAXII2_INTERVAL=100 # Required, in hours
      - TAXII2_VERIFY_SSL=true
      - TAXII2_CREATE_INDICATORS=true # Generate indicators for ingested observables
      - TAXII2_CREATE_OBSERVABLES=true # Generate observables for ingested indicators
    restart: always
    depends_on:
      - opencti
volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:

Regards !

Feature Request: Add support for passing docker secret files

@SamuelHassine
I noticed that, when attempting to pass sensitive information as secrets files in Docker Swarm, the OpenCTI web application itself does not appear to support passing credentials or sensitive API tokens in a secure manner.

The other services such as Minio and RabbitMQ support this, but OpenCTI's Environment variables do not support passing a file.

This leads to issues where if the secrets files have been passed to other services that support it and the same path to the secrets file is added as a value to OpenCTI's env variables, this leads to signature or password mismatch issues.

Some of the following are of issue:

  • APP__ADMIN__PASSWORD
  • MINIO__ACCESS_KEY
  • APP__ADMIN__TOKEN
  • MINIO__SECRET_KEY
  • RABBITMQ__PASSWORD

I'm proposing either adding environment variables suffixed with _FILE to support passing Docker secrets, or keeping the current environment variables with automatic detection of secret/config files.
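For reference, a sketch of the usual *_FILE convention as other official images implement it in their entrypoint scripts (this is the proposed behaviour, not something the OpenCTI image does today):

#!/bin/sh
# If APP__ADMIN__PASSWORD_FILE is set, read the secret file into APP__ADMIN__PASSWORD.
if [ -n "${APP__ADMIN__PASSWORD_FILE:-}" ]; then
  APP__ADMIN__PASSWORD="$(cat "${APP__ADMIN__PASSWORD_FILE}")"
  export APP__ADMIN__PASSWORD
fi
exec "$@"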

ElasticSearch crash all shard failed

With a fresh install of OpenCTI via Docker, with 14 GB of RAM allocated to Elasticsearch, it keeps crashing and restarting with the error "all shards failed".

Here is a log:

{"type": "server", "timestamp": "2022-08-03T07:05:20,666Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "docker-cluster", "node.name": "c8e1441fb70c", "message": "path: /opencti_internal_objects*%2Copencti_stix_meta_objects*%2Copencti_stix_domain_objects*%2Copencti_stix_cyber_observables*%2Copencti_inferred_entities*/_search, params: {ignore_throttled=false, index=opencti_internal_objects*,opencti_stix_meta_objects*,opencti_stix_domain_objects*,opencti_stix_cyber_observables*,opencti_inferred_entities*, track_total_hits=true}", "cluster.uuid": "s_2g0qFWR46Al8298dbijA", "node.id": "iUZ6digiQpqSVsCubhetxw" , "stacktrace": ["org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:713) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:400) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:745) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:497) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.performPhaseOnShard(AbstractSearchAsyncAction.java:308) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.run(AbstractSearchAsyncAction.java:244) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:454) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:199) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.finishPhase(CanMatchPreFilterSearchPhase.java:406) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.access$1100(CanMatchPreFilterSearchPhase.java:66) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$Round.finishRound(CanMatchPreFilterSearchPhase.java:337) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$Round.onOperationFailed(CanMatchPreFilterSearchPhase.java:323) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$Round.doRun(CanMatchPreFilterSearchPhase.java:266) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.runCoordinatorRewritePhase(CanMatchPreFilterSearchPhase.java:190) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.run(CanMatchPreFilterSearchPhase.java:151) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase$1.doRun(CanMatchPreFilterSearchPhase.java:487) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:777) [elasticsearch-7.17.4.jar:7.17.4]", "at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) 
[elasticsearch-7.17.4.jar:7.17.4]", "at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]", "at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]", "at java.lang.Thread.run(Thread.java:833) [?:?]", "Caused by: org.elasticsearch.action.NoShardAvailableActionException",

client_secret missing for GoogleStrategy

  • configuring/deploying via docker-compose
  • for purposes of testing, using the docker-compose.yml file on a local machine via docker
  • attempting to configure GoogleStrategy plugin via docker-compose.yml:
- PROVIDERS__GOOGLE__STRATEGY=GoogleStrategy
- PROVIDERS__GOOGLE__CONFIG__CLIENT__ID=${GOOGLE_CLIENT_ID}
- PROVIDERS__GOOGLE__CONFIG__CLIENT__SECRET=${GOOGLE_CLIENT_SECRET}
- PROVIDERS__GOOGLE__CONFIG__CALLBACK_URL=${GOOGLE_CALLBACK_URL}
- PROVIDERS__LOCAL__STRATEGY=LocalStrategy
  • set these via .env file for testing

  • tried the strings with and without quotes

  • ERROR! "client_secret missing"
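A plausible cause (an assumption based on how OpenCTI maps environment variables, where each double underscore descends one level in the config tree): CLIENT__ID and CLIENT__SECRET become config.client.id and config.client.secret instead of the client_id/client_secret keys the strategy reads. A sketch with single underscores inside the key names:

- PROVIDERS__GOOGLE__STRATEGY=GoogleStrategy
- PROVIDERS__GOOGLE__CONFIG__CLIENT_ID=${GOOGLE_CLIENT_ID}
- PROVIDERS__GOOGLE__CONFIG__CLIENT_SECRET=${GOOGLE_CLIENT_SECRET}
- PROVIDERS__GOOGLE__CONFIG__CALLBACK_URL=${GOOGLE_CALLBACK_URL}
- PROVIDERS__LOCAL__STRATEGY=LocalStrategy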

ERROR: manifest for opencti/platform:4.0.0 not found

Environment: Ubuntu 20.04 Server
Running on Docker.

Error reproduction:

root@openctiserver:/home/administrator/opt/docker# docker-compose --compatibility up
Pulling opencti (opencti/platform:4.0.0)...
ERROR: manifest for opencti/platform:4.0.0 not found: manifest unknown: manifest unknown
root@openctiserver:/home/administrator/opt/docker#
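"manifest unknown" means the requested tag was never published to Docker Hub, so the fix is to pin the compose file to a tag that exists. For example, with a version that appears in other configurations in this document:

docker pull opencti/platform:4.5.4   # confirm the tag pulls before editing docker-compose.yml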

No port 8080 opens

I followed your advice exactly. Still no go. Please suggest.

OPENCTI_ADMIN_EMAIL=[email protected]
OPENCTI_ADMIN_PASSWORD=xx
OPENCTI_ADMIN_TOKEN=904ccff2-4db2-42ab-81b6-a6820494319d
MINIO_ACCESS_KEY=xx
MINIO_SECRET_KEY=xx
RABBITMQ_DEFAULT_USER=xx
RABBITMQ_DEFAULT_PASS=xx
CONNECTOR_HISTORY_ID=904ccff2-4db2-42ab-81b6-a6820494319d
CONNECTOR_EXPORT_FILE_STIX_ID=904ccff2-4db2-42ab-81b6-a6820494319d
CONNECTOR_EXPORT_FILE_CSV_ID=904ccff2-4db2-42ab-81b6-a6820494319d
CONNECTOR_IMPORT_FILE_STIX_ID=904ccff2-4db2-42ab-81b6-a6820494319d
CONNECTOR_IMPORT_FILE_PDF_OBSERVABLES_ID=904ccff2-4db2-42ab-81b6-a6820494319d

Step 1.
Download the docker-compose.yml file

Step 2:
change the .env file

Step 3:
Open terminal and run
set -o allexport; source .env; set +o allexport

Step 4:
docker-compose --compatibility up

Step 5:
Wait 30 minutes +

Step 6:
Go to http://localhost:8080

Step 7:
You are screwed. It does not work.

Step 8:
Repeat the above in fresh Ubuntu with all the dependencies installed.

Step 9:
Screwed again. No port 8080 opens. Port 9000 opens, though.


Seriously provide some support here.
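One detail that stands out in the .env above (an observation, not a confirmed root cause): OPENCTI_ADMIN_TOKEN and every CONNECTOR_*_ID carry the same UUID, while each connector is expected to register under its own distinct UUIDv4. A sketch for generating distinct values:

for var in CONNECTOR_HISTORY_ID CONNECTOR_EXPORT_FILE_STIX_ID \
           CONNECTOR_EXPORT_FILE_CSV_ID CONNECTOR_IMPORT_FILE_STIX_ID \
           CONNECTOR_IMPORT_FILE_PDF_OBSERVABLES_ID; do
  echo "${var}=$(uuidgen)"
done

Beyond that, docker-compose logs opencti shows why nothing ever binds to port 8080, which is faster than waiting 30 minutes.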

docker_opencti_1 error & restart loop

I just upgraded my environment to the latest version of the docker containers. Now, I can't start the OpenCTI container...
The error I see in logs is:

opencti_1                     | Error: getaddrinfo EAI_AGAIN  minio
opencti_1                     |     at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)

If I connect quickly to the container, I can resolve/ping the host 'minio'. The 'minio' container is up and running too.
Any idea?
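getaddrinfo EAI_AGAIN is a temporary DNS resolution failure inside Docker's embedded DNS rather than a missing container, which matches 'minio' resolving fine once you are inside the container. A common workaround (a sketch, not a guaranteed fix) is to recreate the compose network:

docker-compose down    # removes containers and the compose network, keeps the volumes
docker-compose up -d   # fresh network, fresh embedded-DNS state

If it persists, restarting the Docker daemon itself (systemctl restart docker) is the usual next step.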

Problem for basic installation

Hi,

We're a team of two students trying to install OpenCTI to later connect it to the DISARM platform.
We followed the simple Docker installation (https://github.com/OpenCTI-Platform/docker) but couldn't manage to launch the platform when running docker-compose.
Some containers seem to work fine (elasticsearch, redis, rabbitmq); others don't.
We run it on a VM with Ubuntu 22.04, with docker-compose version 1.29.2.

The overall error seems to be: ValueError: OpenCTI API is not reachable.

Thanks a lot for your help; we are beginners, so it is quite possible that we made some rookie mistakes.

Here is our summary of the logs, and at the end our environment files:

openCTI platform :
{"category":"APP","error":{"context":{"category":"technical","error":"connect ECONNREFUSED 172.18.0.2:9200","http_status":500,"reason":"[SEARCH] Search engine seems down"},"message":"A configuration error has occurred","name":"ConfigurationError","stack":"ConfigurationError: A configuration error has occurred\n at error (/opt/opencti/build/src/config/errors.js:8:10)\n at ConfigurationError (/opt/opencti/build/src/config/errors.js:54:53)\n at /opt/opencti/build/src/database/engine.js:171:15\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at searchEngineInit (/opt/opencti/build/src/database/engine.js:161:3)\n at checkSystemDependencies (/opt/opencti/build/src/initialization.js:129:3)\n at boot (/opt/opencti/build/src/boot.js:10:5)"},"level":"error","message":"[OPENCTI] Platform start fail","timestamp":"2023-01-30T14:44:39.554Z","version":"5.5.2"}

Worker opencti :
Traceback (most recent call last):
  File "/opt/opencti-worker/worker.py", line 522, in <module>
    worker = Worker()
  File "<string>", line 6, in __init__
  File "/opt/opencti-worker/worker.py", line 430, in __post_init__
    self.api = OpenCTIApiClient(
  File "/usr/local/lib/python3.9/site-packages/pycti/api/opencti_api_client.py", line 198, in __init__
    raise ValueError(
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
INFO:root:Listing Threat-Actors with filters null.

Minio :
ERROR Unable to validate credentials inherited from the shell environment: Invalid credentials
> Please provide correct credentials
HINT:
Access key length should be at least 3, and secret key length at least 8 characters

docker_connector-export-file-txt_1 :
INFO:root:Listing Threat-Actors with filters null.
OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...

docker_connector-export-file-stix_1 :
INFO:root:Listing Threat-Actors with filters null.
OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...

docker_connector-import-file-stix_1 :
Listing Threat-Actors with filters null.
OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...

docker_connector-import-document_1 :
ValueError: OpenCTI API is not reachable. Waiting for OpenCTI API to start or check your configuration...
Killed

Env && yml files :

OPENCTI_ADMIN_EMAIL=[email protected]
OPENCTI_ADMIN_PASSWORD=tototata
OPENCTI_ADMIN_TOKEN=c41ca777-667b-4421-952f-92f2f5a75485
MINIO_ROOT_USER=375a2a4b-652e-46be-9f70-4500e65bce89
MINIO_ROOT_PASSWORD=448661fa-2278-4061-b20f-3158180e1885
RABBITMQ_DEFAULT_USER=guest
RABBITMQ_DEFAULT_PASS=guest
CONNECTOR_HISTORY_ID=8a7c7bab-e286-4535-b84d-fabc5071029f
CONNECTOR_EXPORT_FILE_STIX_ID=9a34fbf2-f689-4227-b505-41cf4752b10e
CONNECTOR_EXPORT_FILE_CSV_ID=c1618ac3-97b6-44dd-979b-870f20076a8b
CONNECTOR_IMPORT_FILE_STIX_ID=09e40517-fc17-4de6-ad09-104aa7dd4f90
CONNECTOR_IMPORT_REPORT_ID=9d5f694c-c295-46a0-ba4d-5142fc859bdf
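Cross-checking this .env against the compose file below: the compose file references ${ELASTIC_MEMORY_SIZE}, ${OPENCTI_BASE_URL}, ${SMTP_HOSTNAME}, ${CONNECTOR_EXPORT_FILE_TXT_ID} and ${CONNECTOR_IMPORT_DOCUMENT_ID}, none of which are defined here. In particular, an empty ELASTIC_MEMORY_SIZE turns ES_JAVA_OPTS into "-Xms -Xmx", which prevents Elasticsearch from booting at all and would explain the "connect ECONNREFUSED ...:9200" in the platform log above. A sketch of the missing entries (placeholder values to adapt):

ELASTIC_MEMORY_SIZE=4G
OPENCTI_BASE_URL=http://localhost:8080
SMTP_HOSTNAME=localhost
CONNECTOR_EXPORT_FILE_TXT_ID=<output of uuidgen>
CONNECTOR_IMPORT_DOCUMENT_ID=<output of uuidgen>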

version: '3'
services:
  redis:
    image: redis:7.0.6
    restart: always
    volumes:
      - redisdata:/data
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.3
    volumes:
      - esdata:/usr/share/elasticsearch/data
    environment:
      # Comment out the line below for a cluster of multiple nodes
      - discovery.type=single-node
      # Uncomment the line below for a cluster of multiple nodes
      # - cluster.name=docker-cluster
      - xpack.ml.enabled=false
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms${ELASTIC_MEMORY_SIZE} -Xmx${ELASTIC_MEMORY_SIZE}"
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  minio:
    image: minio/minio:RELEASE.2022-09-25T15-44-53Z
    volumes:
      - s3data:/data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    restart: always
  rabbitmq:
    image: rabbitmq:3.11-management
    environment:
      - RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
    volumes:
      - amqpdata:/var/lib/rabbitmq
    restart: always
  opencti:
    image: opencti/platform:5.5.2
    environment:
      - NODE_OPTIONS=--max-old-space-size=8096
      - APP__PORT=8080
      - APP__BASE_URL=${OPENCTI_BASE_URL}
      - APP__ADMIN__EMAIL=${OPENCTI_ADMIN_EMAIL}
      - APP__ADMIN__PASSWORD=${OPENCTI_ADMIN_PASSWORD}
      - APP__ADMIN__TOKEN=${OPENCTI_ADMIN_TOKEN}
      - APP__APP_LOGS__LOGS_LEVEL=error
      - REDIS__HOSTNAME=redis
      - REDIS__PORT=6379
      - ELASTICSEARCH__URL=http://elasticsearch:9200
      - MINIO__ENDPOINT=minio
      - MINIO__PORT=9000
      - MINIO__USE_SSL=false
      - MINIO__ACCESS_KEY=${MINIO_ROOT_USER}
      - MINIO__SECRET_KEY=${MINIO_ROOT_PASSWORD}
      - RABBITMQ__HOSTNAME=rabbitmq
      - RABBITMQ__PORT=5672
      - RABBITMQ__PORT_MANAGEMENT=15672
      - RABBITMQ__MANAGEMENT_SSL=false
      - RABBITMQ__USERNAME=${RABBITMQ_DEFAULT_USER}
      - RABBITMQ__PASSWORD=${RABBITMQ_DEFAULT_PASS}
      - SMTP__HOSTNAME=${SMTP_HOSTNAME}
      - SMTP__PORT=25
      - PROVIDERS__LOCAL__STRATEGY=LocalStrategy
    ports:
      - "8080:8080"
    depends_on:
      - redis
      - elasticsearch
      - minio
      - rabbitmq
    restart: always
  worker:
    image: opencti/worker:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - WORKER_LOG_LEVEL=info
    depends_on:
      - opencti
    deploy:
      mode: replicated
      replicas: 3
    restart: always
  connector-export-file-stix:
    image: opencti/connector-export-file-stix:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileStix2
      - CONNECTOR_SCOPE=application/json
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-csv:
    image: opencti/connector-export-file-csv:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_CSV_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileCsv
      - CONNECTOR_SCOPE=text/csv
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-export-file-txt:
    image: opencti/connector-export-file-txt:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_EXPORT_FILE_TXT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_EXPORT_FILE
      - CONNECTOR_NAME=ExportFileTxt
      - CONNECTOR_SCOPE=text/plain
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-file-stix:
    image: opencti/connector-import-file-stix:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_FILE_STIX_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportFileStix
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/json,text/xml
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti
  connector-import-document:
    image: opencti/connector-import-document:5.5.2
    environment:
      - OPENCTI_URL=http://opencti:8080
      - OPENCTI_TOKEN=${OPENCTI_ADMIN_TOKEN}
      - CONNECTOR_ID=${CONNECTOR_IMPORT_DOCUMENT_ID} # Valid UUIDv4
      - CONNECTOR_TYPE=INTERNAL_IMPORT_FILE
      - CONNECTOR_NAME=ImportDocument
      - CONNECTOR_VALIDATE_BEFORE_IMPORT=true # Validate any bundle before import
      - CONNECTOR_SCOPE=application/pdf,text/plain,text/html
      - CONNECTOR_AUTO=true # Enable/disable auto-import of file
      - CONNECTOR_ONLY_CONTEXTUAL=false # Only extract data related to an entity (a report, a threat actor, etc.)
      - CONNECTOR_CONFIDENCE_LEVEL=15 # From 0 (Unknown) to 100 (Fully trusted)
      - CONNECTOR_LOG_LEVEL=info
    restart: always
    depends_on:
      - opencti

volumes:
  esdata:
  s3data:
  redisdata:
  amqpdata:

Disable LocalStrategy Login and Force only OpenID

Hey Team,

Is there a way to disable the LocalStrategy login so that the only login option provided is OpenID? I have the OpenID strategy working correctly, but when I load the login page I am still offered the local credentials form as well. Is there a way to disable local credentials so that only OpenID can be used?

[screenshot: login page showing the local credentials form alongside the OpenID option]

Below are my OpenID variables within my Dockerfile:

[screenshot: OpenID environment variables from the Dockerfile]

The OpenID strategy functions correctly, but I am hoping there is a way to prevent users from authenticating locally. Is there a variable that I am missing?

Thanks for the help.

Best Regards,
Taylor
