
Comments (15)

EspenAlbert commented on August 26, 2024

Hi, I was banging my head against the wall here while trying out a custom log_group_name, so here is a little warning for the next person wanting to customize it 😅
If you try to follow the previous comment, note the following:

  1. The [OUTPUT] Name he is using is for the old cloudwatch plugin living here, not for this repo's cloudwatch_logs output.
  2. The log_group_name might not work for the cloudwatch_logs output if any symbol other than . follows a variable name (e.g., dev/$(kubernetes['namespace_name'])/$(kubernetes['container_name']) will NOT work); please see the documentation on Limitations of record_accessor syntax and the sketch below.
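
For illustration, a hedged sketch of the limitation in point 2 (the group names are made up, not from the original comment):

    # Will NOT work with cloudwatch_logs: '/' follows the record accessor variables
    log_group_name dev/$(kubernetes['namespace_name'])/$(kubernetes['container_name'])

    # Per the limitation above, only '.' may follow a variable, so a form like this should be safe
    log_group_name dev.$(kubernetes['namespace_name']).$(kubernetes['container_name'])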

PettitWesley commented on August 26, 2024

@EspenAlbert Thank you for your comment!

Some relevant links:

PettitWesley commented on August 26, 2024

@michaelm-88 Sorry, missed this. I think quotes around your log group name might be the problem.
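
For example (a sketch based on the config quoted later in this thread), the unquoted form would look like:

    log_group_name /aws/containerinsights/${CLUSTER_NAME}/host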

PettitWesley commented on August 26, 2024

@ismailyenigul You have multiple outputs.

Usually you would have a tail input like:

[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    DB                /var/log/flb_kube.db
    Parser            docker
    Docker_Mode       On
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    Refresh_Interval  10

As explained here, the log files are named: <pod_name>_<namespace>_<container_id>

This means that the tag should become: kube.<pod_name>_<namespace>_<container_id>.

In Fluent Bit, outputs only send records for tags which they match: https://docs.fluentbit.io/manual/administration/configuring-fluent-bit/configuration-file

You can use Match or Match_Regex to match the tag. So you should be able to create a second output with a match rule that only matches certain pods, and then exclude those pods with the match pattern for the other output.
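
For example, a rough sketch of a specific output plus a catch-all (the regex, region, and log group names here are placeholders, not from this thread):

    [OUTPUT]
        Name                cloudwatch_logs
        # only records whose tag contains the prod namespace
        Match_Regex         kube\..*_prod_.*
        region              us-east-1
        log_group_name      /eks/prod-pods
        auto_create_group   true

    [OUTPUT]
        Name                cloudwatch_logs
        # catch-all; note that a record matching both patterns is sent to both
        # outputs, which is why excluding certain pods comes up below
        Match               kube.*
        region              us-east-1
        log_group_name      /eks/other-pods
        auto_create_group   true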

ismailyenigul commented on August 26, 2024

Thanks for the quick reply!
Can you give me a Match example that sends the namespaces stage and prod to the first output
and the other namespaces to the second output?

ismailyenigul commented on August 26, 2024

I was able to match the stage and prod namespaces, but could not find a way to exclude them in the second output's match:

Match_Regex   kube.*_(stage|prod)_*

PettitWesley commented on August 26, 2024

@ismailyenigul Actually... thinking about it some more... regular expressions are not good for doing negative matches.

Do the other log tags fit some pattern?

If not, then we could use rewrite_tag to change the tag for the logs which you can match, say to something like stage-logs. Then your "catch-all" output can match the original tag, kube*, and the specific output for prod/stage logs can match the tag created by rewrite_tag.

ismailyenigul commented on August 26, 2024

Hi @PettitWesley
I agree about negative matches.
Do we need to change the Docker image config to use rewrite_tag, or is it a configmap update?
An alternative solution could be listing all the other namespaces manually in the second output.
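
For example, something like this for the second output's match (the namespace names here are made up):

    Match_Regex   kube\..*_(default|kube-system|monitoring)_.*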

PettitWesley commented on August 26, 2024

@ismailyenigul It's a filter plugin which you can add to the Fluent Bit config in your configmap: https://docs.fluentbit.io/manual/pipeline/filters/rewrite-tag

Try this:

[FILTER]
    Name          rewrite_tag
    Match         kube*
    Rule          $kubernetes['labels']  ^(.+)$  app_pods  false
    Emitter_Name  re_emitted

I'm not actually clear on the exact details of your use case... but the above will change the tag to app_pods for any logs that have a kubernetes.labels key. Your use case is that some pods have the labels key and some don't, right?

Logs that don't have a kubernetes.labels key would keep their original kube* tag. With this you can now write two outputs.
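
For example, the two outputs could then look roughly like this (region and log group names are placeholders, not from this thread):

    [OUTPUT]
        Name                cloudwatch_logs
        # logs re-tagged to app_pods by the rewrite_tag filter above
        Match               app_pods
        region              us-east-1
        log_group_name      /eks/app-pods
        auto_create_group   true

    [OUTPUT]
        Name                cloudwatch_logs
        # logs that kept their original tag
        Match               kube*
        region              us-east-1
        log_group_name      /eks/other-pods
        auto_create_group   true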

khacminh commented on August 26, 2024

I tried the approach of having multiple log groups for multiple namespaces. However, due to this bug, the kubernetes metadata is omitted from the output

michaelm-88 commented on August 26, 2024

@PettitWesley

Can you please help me with the error below?

[2021/06/29 10:41:05] [debug] [http_client] server logs.us-east-1.amazonaws.com:443 will close connection #74
[2021/06/29 10:41:05] [debug] [aws_client] logs.us-east-1.amazonaws.com: http_do=0, HTTP Status: 400
[2021/06/29 10:41:05] [debug] [output:cloudwatch_logs:cloudwatch_logs.0] CreateLogGroup http status=400
[2021/06/29 10:41:05] [error] [output:cloudwatch_logs:cloudwatch_logs.0] CreateLogGroup API responded with error='SerializationException'
[2021/06/29 10:41:05] [error] [output:cloudwatch_logs:cloudwatch_logs.0] Failed to create log group
[2021/06/29 10:41:05] [debug] [task] created task=0x7fc2a9639a80 id=11 OK

and my output is

[OUTPUT]
    Name cloudwatch_logs
    Match host.*
    region ${REGION}
    log_group_name "/aws/containerinsights/${CLUSTER_NAME}/host"
    log_stream_prefix ${tag}-${record["hostname"]}
    auto_create_group true

PettitWesley commented on August 26, 2024

@michaelm-88

log_stream_prefix ${tag}-${record["hostname"]}

This is not supported. You have to use the cloudwatch plugin to use templating: https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit#templating-log-group-and-stream-names

PettitWesley commented on August 26, 2024

And in the CloudWatch plugin, the field names that accept templating are log_stream_name and log_group_name @michaelm-88
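
A rough sketch with the golang cloudwatch plugin (group name and region are placeholders, and it assumes the record actually carries a hostname key; see the templating docs linked above):

    [OUTPUT]
        Name                cloudwatch
        Match               host.*
        region              us-east-1
        # the golang plugin supports templating in log_group_name and log_stream_name
        log_group_name      /aws/containerinsights/my-cluster/host
        log_stream_name     $(tag)-$(hostname)
        auto_create_group   true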

michaelm-88 commented on August 26, 2024

Hi @PettitWesley, please find the config below and let me know what is wrong. I know I should post this at
fluent/fluent-bit#2927
and not here, sorry about that.

I am following this doc:
https://docs.fluentbit.io/manual/pipeline/outputs/cloudwatch

and I am still getting the same error:


[2021/06/29 17:36:00] [debug] [http_client] server logs.us-east-1.amazonaws.com:443 will close connection #54
[2021/06/29 17:36:00] [debug] [aws_client] logs.us-east-1.amazonaws.com: http_do=0, HTTP Status: 400
[2021/06/29 17:36:00] [debug] [output:cloudwatch_logs:cloudwatch_logs.1] CreateLogGroup http status=400
[2021/06/29 17:36:00] [error] [output:cloudwatch_logs:cloudwatch_logs.1] CreateLogGroup API responded with error='SerializationException'
[2021/06/29 17:36:00] [error] [output:cloudwatch_logs:cloudwatch_logs.1] Failed to create log group
[2021/06/29 17:36:00] [debug] [socket] could not validate socket status for #54 (don't worry)
[2021/06/29 17:36:00] [debug] [out coro] cb_destroy coro_id=80
[2021/06/29 17:36:00] [debug] [task] task_id=1 reached retry-attempts limit 1/1
[2021/06/29 17:36:00] [ warn] [engine] chunk '1-1624988154.122829157.flb' cannot be retried: task_id=1, input=systemd.1 > output=cloudwatch_logs.1
[2021/06/29 17:36:00] [debug] [task] destroy task=0x7fb5b3437a00 (task_id=1)

My config

config:
  service: |
    [SERVICE]
        Flush 1
        Daemon Off
        Log_Level debug
        Parsers_File parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port {{ .Values.service.port }}

  ## https://docs.fluentbit.io/manual/pipeline/inputs
  inputs: |
    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On
    

  ## https://docs.fluentbit.io/manual/pipeline/filters
  filters: |
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

  ## https://docs.fluentbit.io/manual/pipeline/outputs
  outputs: |
    [OUTPUT]
        Name cloudwatch_logs
        Match kube.*
        region ${REGION}
        log_group_name "/aws/containerinsights/${CLUSTER_NAME}/dataplane"
        log_stream_name from-fluent-bit-
        auto_create_group true
        

    [OUTPUT]
        Name cloudwatch_logs
        Match host.*
        region ${REGION}
        log_group_name "/aws/containerinsights/${CLUSTER_NAME}/host"
        log_stream_name from-fluent-bit-
        auto_create_group true

  ## https://docs.fluentbit.io/manual/pipeline/parsers
  customParsers: |
    [PARSER]
        Name docker_no_time
        Format json
        Time_Keep Off
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L

acutchin-bitpusher commented on August 26, 2024

In case someone comes here looking for examples, this config worked for my client, assigning each namespace to a log group, and every container with the same name in that namespace to a single log stream:

    [INPUT]
        Name                tail
        Tag                 kube.*
        Path                /var/log/containers/*.log
        DB                  /var/log/flb_kube.db
        Parser              docker
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    5
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Kube_URL            https://kubernetes.default.svc.cluster.local:443
        Merge_Log           On
        Merge_Log_Key       data
        K8S-Logging.Parser  On
        K8S-Logging.Exclude On
    [OUTPUT]
        Name                cloudwatch
        Match               *
        region              {{ .Values.cloudWatch.region }}
        {{- /* HERE YOU CAN CUSTOMIZE THE LOG GROUP AND LOG STREAM NAMES AS REQUIRED, USING HELM AND K8S VARIABLES */}}
        log_group_name      insert_cluster_name_here-$(kubernetes['namespace_name'])
        log_stream_name     $(kubernetes['container_name'])
        auto_create_group   true
