
Comments (9)

abondoa commented on August 17, 2024

@gpolaert Setting logging.to_syslog to false seemed to do the trick for me. Thanks a lot!
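
For reference, a minimal sketch of that change, following the stock dockbeat.yml layout quoted later in this thread (the /var/log/dockbeat path matches the volume mount used further down and is only an assumption):

logging:
  # The official dockbeat image ships no local syslog daemon, which is what
  # triggers "Error opening syslog: Unix syslog delivery error" at startup.
  to_syslog: false

  # Write the logs to files inside the container instead.
  to_files: true
  files:
    path: /var/log/dockbeat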


abondoa commented on August 17, 2024

I am having the same issue: I get 'Error opening syslog: Unix syslog delivery error' when I check the docker logs.

I have gone into the container, and I can reach the elasticsearch container:
root@b783b39665de:/etc/dockbeat# curl http://elasticsearch:9200
{
  "name" : "Two-Gun Kid",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.0",
    "build_hash" : "ce9f0c7394dee074091dd1bc4e9469251181fc55",
    "build_timestamp" : "2016-08-29T09:14:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}


marminthibaut commented on August 17, 2024

You should try mounting a volume with a custom dockbeat configuration file that defines your existing elasticsearch instance(s) in output.elasticsearch.hosts, as in the sketch below.
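
For example, a minimal sketch of that output section (http://your-es-host:9200 is a placeholder; substitute the address of your existing cluster):

output:
  elasticsearch:
    # Point dockbeat at the existing cluster instead of a linked container.
    hosts: ["http://your-es-host:9200"]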


cabrinoob commented on August 17, 2024

OK, so I can drop the --link option when launching the dockbeat container?


marminthibaut commented on August 17, 2024

Absolutely 👍


cabrinoob commented on August 17, 2024

OK, here is my dockbeat.yml file:

################### Dockbeat Configuration Example #########################

############################# Dockbeat ######################################

dockbeat:
  # Defines how often a docker stat is sent to the output
  period: ${PERIOD:5}

  # Defines the docker socket path
  # By default, this will use unix:///var/run/docker.sock
  socket: ${DOCKER_SOCKET:unix:///var/run/docker.sock}

  # If dockbeat has to deal with a TLS-enabled docker daemon, you need to enable TLS and configure path for key and certificates.
  tls:
    # By default, TLS is disabled
    enable: ${DOCKER_ENABLE_TLS:false}

    # Path to the ca file
    ca_path: ${DOCKER_CA_PATH}

    # Path to the cert file
    cert_path: ${DOCKER_CERT_PATH}

    # Path to the key file
    key_path: ${DOCKER_KEY_PATH}

  # Enable or disable stats shipping
  stats:
    container: true
    net: true
    memory: true
    blkio: true
    cpu: true
###############################################################################
############################# Libbeat Config ##################################
# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
output:

  ### Elasticsearch as output
  elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["http://myelastic.com:9200"]

    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "admin"
    #password: "s3cr3t"

    # Dictionary of HTTP parameters to pass within the url with index operations.
    #parameters:
      #param1: value1
      #param2: value2

    # Number of workers per Elasticsearch host.
    #worker: 1

    # Optional index name. The default is "dockbeat" and generates
    # [dockbeat-]YYYY.MM.DD keys.
    #index: "dockbeat"

    # A template is used to set the mapping in Elasticsearch
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones
    #template:

      # Template name. By default the template name is dockbeat.
      #name: "dockbeat"

      # Path to template file
      #path: "dockbeat.template.json"

      # Overwrite existing template
      #overwrite: false

    # Optional HTTP Path
    #path: "/elasticsearch"

    # Proxy server url
    #proxy_url: http://proxy:3128

    # The number of times a particular Elasticsearch index operation is attempted. If
    # the indexing operation doesn't succeed after this many retries, the events are
    # dropped. The default is 3.
    #max_retries: 3

    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    # The default is 50.
    #bulk_max_size: 50

    # Configure the HTTP request timeout before failing a request to Elasticsearch.
    #timeout: 90

    # The number of seconds to wait for new events between two bulk API index requests.
    # If `bulk_max_size` is reached before this interval expires, additional bulk index
    # requests are made.
    #flush_interval: 1

    # Boolean that sets if the topology is kept in Elasticsearch. The default is
    # false. This option makes sense only for Packetbeat.
    #save_topology: false

    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15

    # TLS configuration. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

      # Configure the minimum TLS version allowed for connections
      #min_version: 1.0

      # Configure the maximum TLS version allowed for connections
      #max_version: 1.2


  ### Logstash as output
  #logstash:
    # The Logstash hosts
    #hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    #worker: 1

    # Set gzip compression level.
    #compression_level: 3

    # Optionally load balance the events between the Logstash hosts
    #loadbalance: true

    # Optional index name. The default index name is set to name of the beat
    # in all lowercase.
    #index: dockbeat

    # SOCKS5 proxy server URL
    #proxy_url: socks5://user:password@socks5-server:2233

    # Resolve names locally when using a proxy server. Defaults to false.
    #proxy_use_local_resolver: false

    # Optional TLS. By default it is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      # Controls whether the client verifies server certificates and host name.
      # If insecure is set to true, all server host names and certificates will be
      # accepted. In this mode TLS based connections are susceptible to
      # man-in-the-middle attacks. Use only for testing.
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []


  ### File as output
  #file:
    # Path to the directory where to save the generated files. The option is mandatory.
    #path: "/tmp/dockbeat"

    # Name of the generated files. The default is `dockbeat` and it generates files: `dockbeat`, `dockbeat.1`, `dockbeat.2`, etc.
    #filename: dockbeat

    # Maximum size in kilobytes of each file. When this size is reached, the files are
    # rotated. The default value is 10240 kB.
    #rotate_every_kb: 10000

    # Maximum number of files under path. When this number of files is reached, the
    # oldest file is deleted and the rest are shifted from last to first. The default
    # is 7 files.
    #number_of_files: 7


  ### Console output
  # console:
    # Pretty print json event
    #pretty: false


############################# Shipper #########################################

shipper:
  # The name of the shipper that publishes the network data. It can be used to group
  # all the transactions sent by a single shipper in the web interface.
  # If this option is not defined, the hostname is used.
  #name:

  # The tags of the shipper are included in their own field with each
  # transaction published. Tags make it easy to group servers by different
  # logical properties.
  #tags: ["service-X", "web-tier"]

  # Optional fields that you can specify to add additional information to the
  # output. Fields can be scalar values, arrays, dictionaries, or any nested
  # combination of these.
  #fields:
  #  env: staging

  # If this option is set to true, the custom fields are stored as top-level
  # fields in the output document instead of being grouped under a fields
  # sub-dictionary. Default is false.
  #fields_under_root: false

  # Uncomment the following if you want to ignore transactions created
  # by the server on which the shipper is installed. This option is useful
  # to remove duplicates if shippers are installed on multiple servers.
  #ignore_outgoing: true

  # How often (in seconds) shippers are publishing their IPs to the topology map.
  # The default is 10 seconds.
  #refresh_topology_freq: 10

  # Expiration time (in seconds) of the IPs published by a shipper to the topology map.
  # All the IPs will be deleted afterwards. Note that the value must be higher than
  # refresh_topology_freq. The default is 15 seconds.
  #topology_expire: 15

  # Internal queue size for single events in processing pipeline
  #queue_size: 1000

  # Sets the maximum number of CPUs that can be executing simultaneously. The
  # default is the number of logical CPUs available in the system.
  #max_procs:

  # Configure local GeoIP database support.
  # If no paths are configured, geoip is disabled.
  #geoip:
    #paths:
    #  - "/usr/share/GeoIP/GeoLiteCity.dat"
    #  - "/usr/local/var/GeoIP/GeoLiteCity.dat"


############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# On Windows systems, the logs are sent to the file output by default;
# on all other systems, to syslog by default.
logging:

  # Send all logging output to syslog. On Windows default is false, otherwise
  # default is true.
  #to_syslog: false

  # Write all logging output to files. Beats automatically rotate files if rotateeverybytes
  # limit is reached.
  #to_files: true

  # To enable logging to files, the to_files option has to be set to true
  files:
    # The directory where the log files will be written to.
    path: /var/log/mybeat

    # The name of the files where the logs are written to.
    #name: mybeat

    # Configure the log file size limit. If the limit is reached, the log file
    # will be automatically rotated.
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"]
  # Other available selectors are beat, publish, service
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  #level: error

I have just modified the "hosts" property.

I launch it through this command:

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/dockbeat:/etc/dockbeat \
  -v /var/logs/dockbeat:/var/log/dockbeat \
  --name dockbeat ingensi/dockbeat

Nothing happens, and when I display the docker logs of the dockbeat container I get:

Error opening syslog: Unix syslog delivery error

But since no "dockbeat" index is created in my cluster, I suppose the container has crashed, though I have no error message except this one.


marminthibaut commented on August 17, 2024

It should work only if the dockbeat container can reach the elasticsearch cluster from inside.

Can you go into your container and try a curl against port 9200?

Use docker exec -it your-container sh to get a shell in your container.


cabrinoob commented on August 17, 2024

I do have access to my elasticsearch cluster from inside the container.


gpolaert commented on August 17, 2024

@abondoa @cabrinoob I think this is not related to your ES cluster. Could you try setting logging.to_syslog to false? I suspect you don't have a local syslog installed in your container/OS. Keep me posted.

