argo-kube-notifier's People

Contributors

4tt1l4, chap-dr, dependabot[bot], dtaniwaki, joehcq1, ryanmaclean, sarabala1979, yutachaos


argo-kube-notifier's Issues

Host docker image under argoproj instead of personal user

Is your feature request related to a problem? Please describe.
As a new user, I am generally more comfortable pulling from an official Docker repo than from an individual's. If the image were hosted under argoproj, it would appear safer. I don't know how hard this would be for someone in the Argo group.

Describe the solution you'd like
Host the argo-kube-notifier Docker image under the argoproj organization.

Additional context
This could be a good change to make at the same time as CR #11.

Currently hosted at https://hub.docker.com/r/sarabala1979/argo-kube-notifier

Changes needed in these locations:

Deleted notifications still being sent by kube-notifier-controller

Describe the bug
When a notification is deleted, the deletion is not reflected in kube-notifier-controller, resulting in notifications still being sent after deletion.

To Reproduce

  1. Create a notification, e.g.
    kubectl apply -f hello-notifier.yaml -n hello-app
    then trigger the notification condition to check that a notification is sent.

  2. Delete the notification:
    kubectl delete notifications.argoproj.io hello-notifier -n hello-app

  3. Trigger the notification condition again; a notification is sent again, even though it has been deleted.
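
For reference, a minimal hypothetical hello-notifier.yaml for step 1 (illustrative only; the actual manifest from the report is not shown, and the field names follow the examples elsewhere in this tracker):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Notification
metadata:
  name: hello-notifier
  namespace: hello-app
spec:
  monitorResource:
    Resource: pods
    Version: v1
  rules:
  - name: hello-rule
    allConditions:
    - jsonPath: metadata/name
      operator: ne
      value: badName
    events:
    - message: "Test notification from {{ .metadata.name }}"
      notificationLevel: info
      notifierNames:
      - slack
    initialDelaySec: 1
    throttleMinutes: 10
  notifiers:
  - name: slack
    slack:
      channel: placeholder
      hookUrlSecret:
        key: hookURL
        name: webhook-server-secret
```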

Expected behavior
When a given notification is deleted, kube-notifier-controller should stop sending notifications of that kind.

Additional context
To update kube-notifier-controller with the latest changes (the notification deletion), I had to restart the controller manually (by deleting the pod).

Add SECURITY.md

The Argo maintainers recently agreed to require all Argoproj Labs project repositories to contain a SECURITY.md file which documents:

  • Contact information for reporting security vulnerabilities
  • Minimal information about security policies and practices, possibly with links to further documentation

This will help direct vulnerability reports to the right parties, who can fix the issue.
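
A minimal SECURITY.md sketch covering both bullets (the contact address and process details below are placeholders, not the maintainers' actual policy):

```markdown
# Security Policy

## Reporting a Vulnerability

Please report suspected security vulnerabilities privately to the
address designated by the Argo maintainers (placeholder). Do not open
a public GitHub issue for security reports.

## Policies and Practices

- Reports are acknowledged and triaged by the maintainers.
- Fixes are released as patch versions; see the Argoproj security
  documentation for further details.
```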

You are free to use the following as examples/templates:

Also, please note that in the future we are exploring a requirement that argoproj-labs projects perform a CII self-assessment to better inform their users about which security best practices are being followed.

Update Docker Image & Fix Dockerfile as Required

Is your feature request related to a problem? Please describe.
I'm trying to build the current Dockerfile and it is failing. Without time to debug this, I'm forced to use the current image, which was pushed in July 2019 and is many commits behind master.

Describe the solution you'd like
Update the Dockerfile so it builds.

Describe alternatives you've considered
Figure it out myself.

Additional context
Note that the latest CI build failed (commit 87513f3).

Cloning this repo and running in root:

->docker build .
Sending build context to Docker daemon  63.37MB
Step 1/12 : FROM golang:1.10.3 as builder
 ---> d0e7a411e3da
Step 2/12 : WORKDIR /go/src/github.com/argoproj-labs/argo-kube-notifier
 ---> Using cache
 ---> 4f4f1fc132d9
Step 3/12 : COPY pkg/    pkg/
 ---> Using cache
 ---> 3c0fba13f106
Step 4/12 : COPY cmd/    cmd/
 ---> Using cache
 ---> a7ca518c0b2a
Step 5/12 : COPY vendor/ vendor/
COPY failed: stat /var/snap/docker/common/var-lib-docker/tmp/docker-builder190568069/vendor: no such file or directory
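
The failing step copies a vendor/ directory that does not exist in a fresh clone. One way to make the build self-contained (a sketch, assuming dependencies can be vendored with dep, which matches the Go 1.10 base image; Gopkg.toml/Gopkg.lock are assumed file names) is to populate vendor/ inside the builder stage instead of COPYing it from the build context:

```dockerfile
FROM golang:1.10.3 as builder
WORKDIR /go/src/github.com/argoproj-labs/argo-kube-notifier

# Copy dependency manifests first so the vendoring layer caches well.
# Gopkg.toml/Gopkg.lock are assumptions; use the repo's actual dep files.
COPY Gopkg.toml Gopkg.lock ./
RUN go get -u github.com/golang/dep/cmd/dep && dep ensure -vendor-only

COPY pkg/ pkg/
COPY cmd/ cmd/
```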

Segfault on notification creation

Describe the bug

I got the following error log from the controller when I created a notification resource.

{"level":"info","ts":1566366594.654175,"logger":"entrypoint","msg":"setting up client for manager"}
{"level":"info","ts":1566366594.6545167,"logger":"entrypoint","msg":"setting up manager"}
{"level":"info","ts":1566366594.949193,"logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":1566366594.9494047,"logger":"entrypoint","msg":"setting up scheme"}
{"level":"info","ts":1566366594.949644,"logger":"entrypoint","msg":"Setting up controller"}
{"level":"info","ts":1566366594.9497604,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"notification-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1566366594.9498649,"logger":"entrypoint","msg":"setting up webhooks"}
{"level":"info","ts":1566366594.9499733,"logger":"entrypoint","msg":"Starting the Cmd."}
{"level":"info","ts":1566366595.0514185,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"notification-controller"}
{"level":"info","ts":1566366595.151721,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"notification-controller","worker count":1}
1059609
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0xfced74]

goroutine 178 [running]:
github.com/argoproj-labs/argo-kube-notifier/notification/controller.(*Watcher).runWatch(0xc42001ac80)
        /go/src/github.com/argoproj-labs/argo-kube-notifier/notification/controller/Watcher.go:68 +0x34
github.com/argoproj-labs/argo-kube-notifier/notification/controller.(*Watcher).watch(0xc42001ac80)
        /go/src/github.com/argoproj-labs/argo-kube-notifier/notification/controller/Watcher.go:38 +0x45
created by github.com/argoproj-labs/argo-kube-notifier/notification/controller.(*NewNotificationController).RegisterNotification
        /go/src/github.com/argoproj-labs/argo-kube-notifier/notification/controller/controller.go:85 +0x5b9
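
The trace points at a nil dereference inside Watcher.runWatch. A defensive pattern (an illustrative sketch with hypothetical types, not the project's actual code) is to validate the watch source before looping, so a misconfigured monitorResource surfaces as an error instead of a panic:

```go
package main

import (
	"errors"
	"fmt"
)

// watchSource is a stand-in for the watch interface the controller opens.
type watchSource interface {
	ResultChan() <-chan string
}

// runWatch returns an error instead of panicking when the source is nil,
// e.g. when the monitored Group/Version/Resource could not be resolved.
func runWatch(w watchSource) error {
	if w == nil {
		return errors.New("watch source is nil; check monitorResource group/version/resource")
	}
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev)
	}
	return nil
}

func main() {
	if err := runWatch(nil); err != nil {
		fmt.Println("error:", err)
	}
}
```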

I installed the revision of 47c5a859d0f3a72fe938138a357c879521aa474d.

To Reproduce

I tried the following manifest. I masked sensitive data.

apiVersion: argoproj.io/v1alpha1
kind: Notification
metadata:
  name: notification-test
spec:
  monitorResource:
    Group:    argoproj.io
    Resource: workflows
    Version:  alphav1
  namespace: default
  notifiers:
  - name: slack
    slack:
      tokenSecret:
        name: my-slack-secret
        key: token
      hookUrlSecret:
        name: my-slack-secret
        key: hookURL
      channel: ***
  rules:
  - name: wf
    anyconditions:
    - jsonpath: "status/phase"
      operator: "eq"
      value: "Failed"
    - jsonpath: "status/phase"
      operator: "eq"
      value: "Error"
    events:
    - message: "Workflow ={{.metadata.name}} Failed."
      notificationLevel: critical
      notifierNames:
      - slack
    initialDelaySec: 10
    throttleMintues: 1
---
apiVersion: v1
kind: Secret
metadata:
  name: my-slack-secret
  namespace: default
type: Opaque
data:
  hookURL: ***
  token: ***

Expected behavior
It should not fail with the segfault.

Screenshots
N/A

Desktop (please complete the following information):
N/A

Smartphone (please complete the following information):
N/A

Additional context
N/A

Allow secrets to be updated

Is your feature request related to a problem? Please describe.

I updated the Slack URL after trying several notifications with my old incoming webhook URL, but the operator didn't pick up the new one until I deleted the pod and let it restart.

Describe the solution you'd like

We're caching the client once it gets created.

notifierMap[notifier.Name] = integration.NewSlackClient(client, namespace, notifier.Slack)

We need a mechanism to sync secrets and recreate slack clients.
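
One such mechanism (a sketch with hypothetical names, not the project's API): key the cached client by the secret's resourceVersion, so an updated secret transparently produces a new client:

```go
package main

import "fmt"

// slackClient is a stand-in for the real integration client.
type slackClient struct{ hookURL string }

// clientCache recreates the client whenever the backing secret's
// resourceVersion changes, instead of caching it forever.
type clientCache struct {
	resourceVersion string
	client          *slackClient
}

func (c *clientCache) get(secretVersion, hookURL string) *slackClient {
	if c.client == nil || c.resourceVersion != secretVersion {
		c.client = &slackClient{hookURL: hookURL}
		c.resourceVersion = secretVersion
	}
	return c.client
}

func main() {
	cache := &clientCache{}
	a := cache.get("v1", "https://old.example/hook")
	b := cache.get("v1", "https://old.example/hook") // same version: cached
	c := cache.get("v2", "https://new.example/hook") // new version: recreated
	fmt.Println(a == b, a == c) // true false
}
```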

Describe alternatives you've considered

Delete the operator pod whenever a secret is updated, although that doesn't scale at all.

Additional context

N/A

Notification in one namespace watches resources in other namespaces

Describe the bug
The notification controller checks resources in all namespaces, not only the namespace where the notification is applied.

To Reproduce

  1. Have two apps in two different namespaces "foo" and "bar".
  2. Apply a notification to namespace "foo"

kubectl apply -f example/foo-notifier.yaml -n foo

apiVersion: argoproj.io/v1alpha1
kind: Notification
metadata:
  name: "foo-notifier"
  namespace: foo
spec:
  Namespace: foo
  monitorResource:
    Resource: pods
    Version: v1
  rules:
  - allConditions:
    - jsonPath: metadata/name
      operator: ne
      value: badName
    events:
        -
          message: |
            Test notification from namespace: {{ .metadata.namespace }}
          notificationLevel: info
          notifierNames:
            - slack
    name: "test-notification"
    initialDelaySec: 1
    throttleMinutes: 10
  notifiers:
  - name: slack
    slack:
      channel: placeholder
      hookUrlSecret:
        key: hookURL
        name: webhook-server-secret

If initialDelaySec > 0, a warning is logged in kube-notifier-controller-0 and no message is sent:

kubectl logs -f  kube-notifier-controller-0

time="2020-05-18T07:19:47Z" level=warning msg="Error occured getting resource. pods \"bar-sjshv\" not found"

If initialDelaySec = 0, the notification is sent:
"Test notification from namespace: bar"

Expected behavior
The controller should only watch resources in the namespace where the notification is applied; in the above example, resources in namespace foo, but not bar.

/etc/ssl/cert mounted from AWS EKS results in x509 error when trying to POST

Describe the bug
The /etc/ssl/cert folder is mounted from the host on AWS EKS.

Actual error: "x509: failed to load system roots and no roots provided"

To Reproduce
Steps to reproduce the behavior:

  1. Spin up application with SS on AWS EKS
  2. Apply dummy rule to always notify

Expected behavior
/etc/ssl/certs from any host should work. Removing the hostmount for /etc/ssl/certs and apt-installing ca-certificates allows a successful curl to test the webhook integration.
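
The workaround described above can be baked into the image instead of relying on a hostPath mount (a sketch assuming a Debian-based runtime image):

```dockerfile
FROM debian:stable-slim
# Install CA roots in the image so Go's TLS stack can verify webhook
# endpoints without mounting /etc/ssl/certs from the host.
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```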

"throttleMinutes" parameter does not seem to have any effect

Describe the bug
The "throttleMinutes" parameter does not seem to have any effect. A notification is send every time condition is met (once every minute or so), even though throttleMinutes is set to e.g 10 minutes.

To Reproduce
Steps to reproduce the behavior:
Apply a notification whose condition will always be true, with throttleMinutes: 10. Example:

apiVersion: argoproj.io/v1alpha1
kind: Notification
metadata:
  name: "dummy-notification"
  namespace: placeholder
spec:
  Namespace: placeholder
  monitorResource:
    Resource: pods
    Version: v1
  rules:
  - allConditions:
    - jsonPath: metadata/name
      operator: ne
      value: badName
    events:
        -
          message: |
            Dummy notification
          notificationLevel: info
          notifierNames:
            - slack
    name: "dummy-notification"
    initialDelaySec: 1
    throttleMinutes: 10
  notifiers:
  - name: slack
    slack:
      channel: placeholder
      hookUrlSecret:
        key: hookURL
        name: webhook-server-secret

Expected behavior
When the condition is met, a notification should be sent once after 1 second (initialDelaySec: 1), but only if no similar notification was sent within the last 10 minutes (throttleMinutes).


Unable to monitor container Restarts

Describe the bug

No alerts triggered when restartCount > 0.

To Reproduce
apiVersion: argoproj.io/v1alpha1
kind: Notification
metadata:
  name: "dummy-notification"
  namespace: dev
spec:
  Namespace: dev
  monitorResource:
    Resource: pods
    Version: v1
  rules:
  - allConditions:
    - jsonPath: status/containerStatuses[0]/restartCount
      operator: gt
      value: "0"
    events:
    - message: "Condition Triggered : Pod ={{.metadata.name}} is being tested"
      emailSubject: "[ALERT] Argo Notification Condition Triggered {{.metadata.name}}"
      notificationLevel: info
      notifierNames:
      - slack
    name: "dummy-notification"
    initialDelaySec: 1
    throttleMinutes: 1
  notifiers:
  - name: slack
    slack:
      channel: alerts
      hookUrlSecret:
        key: hookURL
        name: my-slack-secret

Expected behavior

I expect a notification every time a pod's restart count is greater than 0.

  • Latest image built locally from master.

test

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

Describe the solution you'd like
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Continuously Deliver New Docker Image to DockerHub

Is your feature request related to a problem? Please describe.
As a new user, I was discouraged from using argo-kube-notifier when I discovered that the latest docker image is many months old.

Describe the solution you'd like
Continuously deploy the most recent commit on master to DockerHub.
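
A hypothetical GitHub Actions workflow illustrating the idea (a sketch only; the workflow file, secret names, and target image name are assumptions, not existing project configuration):

```yaml
# .github/workflows/publish.yml (hypothetical)
name: publish-image
on:
  push:
    branches: [master]
jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t argoproj/argo-kube-notifier:latest .
      - name: Push image
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker push argoproj/argo-kube-notifier:latest
```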

Describe alternatives you've considered
Ensuring that every merge into master can be built with a simple docker build . command, enforced by a check suite, would mitigate the impact of discovering that the image is old.

Additional context
PR #10 is the result of trying to get it to build.

ImagePullBackOff Error: cannot deploy the application

Describe the bug
The pod fails to start due to an image pull error:

Failed to pull image "sarabala1979/argocd-kube-notifier:latest": rpc error: code = Unknown desc = failed  to pull and unpack image "docker.io/sarabala1979/argocd-kube-notifier:latest": failed to resolve reference "docker.io/sarabala1979/argocd-kube-notifier:latest": pull  access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed 
