argocd-operator's Introduction

Argo CD Operator

A Kubernetes operator for managing Argo CD clusters.

Documentation

See the documentation for installation and usage of the operator.

E2E testing

E2E tests are written using KUTTL. Please install KUTTL to run the tests.

Note that the e2e tests for Redis HA mode require a cluster with at least three worker nodes. A local three-worker-node cluster can be created using k3d.
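As a sketch, the setup might look like the following (the cluster name and test directory are assumptions, not the repo's documented values — check the repo's Makefile or docs for the actual test path):

```
# Create a local cluster with three worker nodes (name is arbitrary)
k3d cluster create argocd-e2e --agents 3

# Run the KUTTL test suite via the kubectl plugin
# (the path to the e2e tests is an assumption)
kubectl kuttl test ./tests
```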

License

The Argo CD Operator is released under the Apache 2.0 license. See the LICENSE file for details.

argocd-operator's Issues

Can't override hostname for route

Due to #54, I renamed my ArgoCD instance from "argocd" to "argocdinstance". This causes the default route hostname to change from argocd-server-argocd.apps.my.cluster to argocdinstance-server-argocd.apps.my.cluster. According to the docs, it should be possible to set the route hostname via spec.server.host, but this doesn't seem to affect the hostname of the exposed route.
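For reference, the shape I understand the docs to describe is something like this (the hostname is a placeholder; the exact field layout should be confirmed against the CRD):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocdinstance
spec:
  server:
    host: argocd.apps.my.cluster   # expected to become the route hostname
```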

Several Dashboard Metrics Not Working

There are several items on the Argo CD dashboard in Grafana that are not working. Some items are using deprecated metrics and need to be updated.

Launching an application using an SSH repository

Hello there,

I'm having an issue with the operator on OpenShift 4.3, installed via the Operator Hub.
Even though I can connect to an SSH repository in the repository settings, when I try to create an application that uses that same repo I get the following error:

Unable to create application: application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = Internal desc = Failed to fetch git repo: `git fetch origin --tags --force` failed exit status 128: No user exists for uid 1000540000 fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists.

While debugging the issue, when I access the argocd-repo-server pod the "default" user is not present and I am greeted with I have no name!@argocd-repo-server-74757c9cb-hsc8c:/home/argocd$

Is there a problem with my particular installation or is it a known problem?

Thanks

ArgoCDExport getting InvalidImageName on pod creation

Hey there!

First of all thank you for all the hard work on the operator!

I'm having problems with the ArgoCDExport on v0.0.5. I'm on OpenShift 4.3 on AWS.

The operator creates the PVC and the Job, but I'm getting an error on the pod creation.

I get an Error: InvalidImageName, and upon further debugging I see:

Failed to apply default image tag "argoproj/argocd:sha256:f7a4a8e4542ef9d2e0cb6d3fe5814e87b79b8064089c0bc29ae7cefae8e93b66": couldn't parse image reference "argoproj/argocd:sha256:f7a4a8e4542ef9d2e0cb6d3fe5814e87b79b8064089c0bc29ae7cefae8e93b66": invalid reference format

I've tried pulling v1.4.2 instead, but I then get /bin/bash: /backups/argocd-backup.yaml: Permission denied

Any idea what it could be?

Thanks!
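The failing reference above uses `:` before the digest; container image references pin a digest with `@` (e.g. `argoproj/argocd@sha256:…`), while `:` introduces a tag, so `argoproj/argocd:sha256:…` is rejected as a malformed tag. A minimal sketch illustrating the distinction (simplified regexes, not the full OCI reference grammar):

```python
import re

# A digest reference uses "@sha256:<64 hex chars>"; a tag reference
# uses ":<tag>". "argoproj/argocd:sha256:..." fails both, which is
# exactly the "invalid reference format" error above.
DIGEST_RE = re.compile(r"^[^@:]+(?::\d+)?(?:/[^@:]+)*@sha256:[0-9a-f]{64}$")
TAG_RE = re.compile(r"^[^@:]+(?:/[^@:]+)*:[\w][\w.-]{0,127}$")

def classify(ref: str) -> str:
    """Classify an image reference as 'digest', 'tag', or 'invalid'."""
    if DIGEST_RE.match(ref):
        return "digest"
    if TAG_RE.match(ref):
        return "tag"
    return "invalid"
```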

Add the ability to define content to be inserted into the argocd-cm configmap

A lot of additional configuration is handled within the argocd-cm configmap. Is there currently a way to define this as part of the request to spin up an ArgoCD instance? I don't see anything that pops out after a quick scan of the docs -- but is this currently possible with this operator? If not, is this something that this operator would want to do in the future? Happy to help contribute some of this if it would be accepted.
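A hypothetical shape for such a feature (the field name and structure below are my invention for illustration, not an existing API of the operator) might be:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
spec:
  extraConfig:                      # hypothetical field name
    accounts.alice: apiKey, login   # entries merged into argocd-cm
    timeout.reconciliation: 180s
```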

Add an option to give the argocd-application-controller cluster-admin (or similar) rights.

It would be nice to have the ability to grant Argo CD "cluster-admin" capabilities (or something similar) to allow for more cluster configuration to be controlled by Argo CD.

For example, with Argo CD 0.0.3 I had altered the argocd-application-controller role to allow Argo CD to:

  • Create namespaces
  • CRUD on secrets, even in openshift- namespaces.
  • Update SCCs.

This was nice because I could have almost my entire cluster configuration backed up in git, and modified through pull request.

The ability to control the service accounts that get the anyuid scc was nice as well. The version of Bitnami Sealed Secrets that I'm using on the cluster requires a service account with anyuid. I was able to have this scc yaml file in git managed by Argo CD, so if I created a new cluster, I didn't have to remember to add that scc to the service account.

Also, it was nice to have Argo CD manage my OAuth config. When I have a new cluster, creating the "cluster config" project and application was enough to have Htpasswd and Github auth applied to my cluster.

I also like the idea of restricting resources/verbs on a per-project basis.

I'm not very familiar with the workings of the operator, but my naive suggestion would be an additional flag on the argocd CRD to grant cluster-admin by default.

Thanks.

Issues with openshift installation

In the OpenShift installation there are a couple of things I am seeing issues with. I have OpenShift v4.

  1. The OpenShift install document says - oc create -f deploy/crds/argoproj_v1alpha1_argocd_crd.yaml - however the file argoproj_v1alpha1_argocd_crd.yaml doesn't exist.
  2. The operator install is erroring out with the message below:
    oc logs argocd-operator-769676587-l6nwl
    ....
    ....
    {"level":"info","ts":1581389982.2209306,"logger":"cmd","msg":"Registering Components."}
    {"level":"info","ts":1581389982.2213113,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"argocd-controller","source":"kind source: /, Kind="}
    {"level":"error","ts":1581389985.7235928,"logger":"controller-runtime.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"ArgoCD.argoproj.io","error":"no matches for kind "ArgoCD" in version "argoproj.io/v1alpha1"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/john/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/home/john/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/source/source.go:88\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/home/john/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:122\ngithub.com/argoproj-labs/argocd-operator/pkg/controller/argocd.watchResources\n\targocd-operator/pkg/controller/argocd/util.go:400\ngithub.com/argoproj-labs/argocd-operator/pkg/controller/argocd.add\n\targocd-operator/pkg/controller/argocd/argocd_controller.go:58\ngithub.com/argoproj-labs/argocd-operator/pkg/controller/argocd.Add\n\targocd-operator/pkg/controller/argocd/argocd_controller.go:46\ngithub.com/argoproj-labs/argocd-operator/pkg/controller.AddToManager\n\targocd-operator/pkg/controller/controller.go:27\nmain.main\n\targocd-operator/cmd/manager/main.go:152\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
    {"level":"error","ts":1581389985.7236865,"logger":"cmd","msg":"","error":"no matches for kind "ArgoCD" in version "argoproj.io/v1alpha1"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/john/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\targocd-operator/cmd/manager/main.go:153\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}

Please let me know if any additional info is needed,

Ingress annotations getting ignored

While trying to configure an ArgoCD resource with this operator, I hit a problem where the "annotations" for the Ingress are ignored and the resulting Ingress gets the "nginx" ingress class.
Maybe I'm doing something wrong (there is no example of how this annotation should be written):

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd
  labels:
    env: devenv
spec:
  ingress:
    enabled: true
    path: /
    annotations:
      kubernetes.io/ingress.class: nginx-yandex-main

Add the ability to customize the OpenShift route that is created

Currently the operator provides the ability to customize the Ingress object that gets created; the Route handling, however, is pretty vanilla. It would be nice to be able to adjust some of the specifics of the Route object that gets created in a similar manner: InsecureEdgeTerminationPolicy, host, etc.

default admin password issue

To view it (of course it's bcrypted), I use:
kubectl get secret -n argocd argocd-secret -o json | jq '.data|to_entries|map({key, value:.value|@base64d})|from_entries'

Tried the pod name as the password for user admin, i.e. "snapcore-argocd-server-66f8db6487-7m8tx"
Tried to force the password with
kubectl patch secret -n argocd argocd-secret -p '{"stringData": { "admin.password": "'$(htpasswd -bnBC 10 "" newpassword | tr -d ':\n')'"}}'

then restarted the server pod, still getting an "Invalid username or password"
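For completeness, the reset procedure in the Argo CD FAQ also updates admin.passwordMtime alongside the password; a sketch of both patches together (the argocd namespace and "newpassword" are placeholders):

```
kubectl -n argocd patch secret argocd-secret -p \
  '{"stringData": {
     "admin.password": "'"$(htpasswd -nbBC 10 "" newpassword | tr -d ':\n')"'",
     "admin.passwordMtime": "'"$(date +%FT%T%Z)"'"
   }}'
```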

running out of ideas ;-)

Invalid Credentials Generated

On a fresh install using OLM we cannot access Argo via the UI or CLI. All attempts respond with incorrect credentials. Checking the events and watching the deployments come up, we are positive this is the first container and not a recreated one. So, following this guide https://argoproj.github.io/argo-cd/faq/#i-forgot-the-admin-password-how-do-i-reset-it, we reset the password. No change. We also tried deleting admin.password and admin.passwordMtime as noted in the guide. This causes a new container to be created and the secret to be updated with new credentials. We tried logging in again with both the old and new pod names, to no avail. We even used the terminal in the pod to log in, to rule out the ingress doing something weird. Again no success. Any assistance would be much appreciated.

Add support for setting OIDC redirect_uri in config

Description

Add option to set the host name of the argocd server in the OIDC config.

Currently this defaults to argocd-server, which does not resolve correctly, so SSO servers throw errors or redirect to a host that cannot be resolved.

Expected Behavior

Have an option to set this host name in the OIDC config or have the operator use the server > host value set in the server config.

Actual Behavior

The redirect_uri hostname is set to argocd-server:

https://keycloak.example.com/auth/realms/myrealm/protocol/openid-connect/auth?client_id=argocd&redirect_uri=https%3A%2F%2Fargocd-server%2Fauth%2Fcallback&response_type=code&scope=openid+profile+email+groups&state=jETgjbecfO

Environment

  • Operating system: Fedora 32.20200512.0 (IoT Edition)
  • k8s version: v1.17.2+k3s1
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2+k3s1", GitCommit:"cdab19b09a84389ffbf57bebd33871c60b1d6b28", GitTreeState:"clean", BuildDate:"2020-01-27T18:09:26Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
  • Project Version/Tag: v0.0.8
  • SSO server: Keycloak 9.0.2

Steps to reproduce

  • Install the argocd operator using olm method
  • Install a SSO server (in my testing keycloak)
  • Setup a client for Argocd on the SSO server
  • Create an argocd instance based on the example cr with an oidcConfig in the spec
  • Select LOGIN VIA KEYCLOAK in the web interface
  • Redirected to the SSO server and get error Invalid parameter: redirect_uri

If you set up the SSO to accept any Redirect URIs you get redirected to the argocd-server hostname.
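For illustration, a CR along these lines is what I would expect to work once the operator honors the host (all hostnames, the realm, and the clientSecret reference are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
spec:
  server:
    host: argocd.example.com        # should become the redirect_uri host
  oidcConfig: |
    name: Keycloak
    issuer: https://keycloak.example.com/auth/realms/myrealm
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes: ["openid", "profile", "email", "groups"]
```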

Operator Hub out of date

Howdy! It looks like the operator bits in Operator Hub are out of date, as attempting to install from there results in a Forbidden error when accessing argocds/status. Looks like you just added this and it just hasn't bubbled up yet?

Update CRD Properties After Creation

The operator is not watching for updates to all of the properties on the CRD. Once the CRD has been created, making changes to the ResourceCustomizations property for example, has no effect and the argocd-cm ConfigMap is not updated as expected.

Additional steps when installing on OpenShift 4.X

When installing the Operator on Operatorhub you cannot use the upstream OLM but should use the OpenShift provided one.

The steps below worked for me to install the argocd operator from operatorhub.io on an OCP 4 cluster using the pre-installed OLM:

oc create ns argocd
cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: openshift-marketplace
spec:
  displayName: OperatorHub.io Operators
  publisher: OperatorHub.io
  sourceType: grpc
  image: quay.io/operatorframework/upstream-community-operators:latest
EOF
cat <<EOF | oc create -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: operatorhub-io-operators
  namespace: argocd
spec:
  targetNamespaces:
Need Documentation for Export Process

There needs to be documentation added that explains in detail how to perform backup/restore operations using the ArgoCDExport resource that includes examples for both supported storage locations (local persistent volume and off-site AWS S3 bucket).
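As a starting point for that documentation, a minimal export to a local persistent volume might look like this (names are placeholders; the exact schema should be confirmed against the ArgoCDExport CRD):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCDExport
metadata:
  name: example-argocdexport
spec:
  argocd: example-argocd   # name of the ArgoCD instance to export
```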

Something is wrong with password passing

I am trying to create a new instance with this config:

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd-main
  namespace: argocd
  labels:
    env: prod
spec:
  image: argoproj/argocd
  version: v1.4.2
  ha:
    enabled: false
  server:
    autoscale:
      enabled: false
    grpc:
      host: grpcsome.domain
      ingress: true
    host: some.domain
    ingress: true
    insecure: true
    service:
      type: ClusterIP
  redis:
    image: redis
    resources: {}
    version: 5-alpine
  statusBadgeEnabled: true
  usersAnonymousEnabled: false
  grafana:
    enabled: false
  prometheus:
    enabled: false

It creates everything and there is a pod named argocd-main-server-765b55b765-pv6w7
When I try to use it as a password to "admin" account it fails to log me in.
I even tried to patch the secret with a password as simple as 12345 in a bcrypt hash, yet it won't work for some reason. I think it may be related to issue #29, as OLM automatically installs 0.0.6... Maybe it's time to create a beta branch for semi-stable versions? =)

Verify Namespace Required for ClusterRoleBinding

I think that namespace in deploy/role_binding.yaml can be removed. OLM handles the role bindings for the operator and on my OpenShift cluster, the ClusterRoleBinding does not have a namespace associated.

RBAC issues with 0.0.5

I've tried setting up ArgoCD with this operator (both installed with OLM and manually)
And now when I create an Application I am getting this error:
secrets is forbidden: User "system:serviceaccount:argocd:argocd-application-controller" cannot create resource "secrets" in API group "" in the namespace "clickhouse"

Application yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: clickhouse
  namespace: argocd
spec:
  project: default
  source:
    repoURL: [email protected]:some/path/gitops.git
    targetRevision: HEAD
    path: service/overlays/production/clickhouse
  destination:
    server: https://kubernetes.default.svc
    namespace: clickhouse
  syncPolicy:
    validate: true
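As a workaround sketch (not an operator feature), one could grant the controller's ServiceAccount the missing permissions in the target namespace; the role name and verb list below are assumptions:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-secrets
  namespace: clickhouse
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "list", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-secrets
  namespace: clickhouse
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller
    namespace: argocd
roleRef:
  kind: Role
  name: argocd-secrets
  apiGroup: rbac.authorization.k8s.io
```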

Upgrade Operator SDK Dependency

Currently the operator depends on v0.12.0 of the operator-sdk. The operator needs to be updated to use the current version of the operator-sdk to prevent getting too far behind, resulting in a more challenging upgrade in the future.

Need Finer Control Over Ingress and Routes

The ability is needed to selectively toggle ingress and routes independently for each exposed resource.

  • Argo CD Server
  • Grafana
  • Prometheus

Specifically, the Prometheus component may not need to be exposed with an Ingress or Route.

Keeps rewriting argocd-ssh-known-hosts-cm

When I apply my own ConfigMap with SSH known hosts, it stays for several minutes and then gets rewritten with the default one. Is there any way to stop this? According to the docs, I'm trying to do it the way it is supposed to be done...

Create argocd password

For folks who won't be using an outside authentication system, being able to create the initial argocd-server password would be beneficial (versus having to go through the process of grabbing the pod name). Would this be something that could be added to the operator?

Redis restarting issue

After upgrading to v0.0.5 I am having a problem with Redis multiple restarts. Tried re-creating an empty namespace and using the most basic spec from examples, the result is always the same. It spawns multiple redis containers which terminate very rapidly. Describing them would return:

Pod 'example-argocd-redis-7667b47db5-l4wnq': error 'pods "example-argocd-redis-7667b47db5-l4wnq" not found', but found events.
Events:
  Type     Reason     Age   From                                Message
  ----     ------     ----  ----                                -------
  Normal   Scheduled  9s    default-scheduler                   Successfully assigned argocd/example-argocd-redis-7667b47db5-l4wnq to cl1pjlk884gsikfpe164-emev
  Normal   Pulling    8s    kubelet, cl1pjlk884gsikfpe164-emev  Pulling image "redis:5.0.3"
  Normal   Pulled     6s    kubelet, cl1pjlk884gsikfpe164-emev  Successfully pulled image "redis:5.0.3"
  Warning  Failed     6s    kubelet, cl1pjlk884gsikfpe164-emev  Error: cannot find volume "default-token-9jf8s" to mount into container "redis"

ArgoCD doesn't work in a disconnected environment

Hi,

I've been testing ArgoCD operator version 0.0.8 and it seems the ArgoCD controller and server are deployed fine in a disconnected environment in OpenShift (using cluster-wide proxy). But when I tried to deploy an Application, the server cannot reach an external repository like Github.

Would it be possible to add the cluster-wide proxy env vars to the server? Or any method so we can add custom environment variables?

Environment:

  • Openshift 4.3
  • ArgoCD operator 0.0.8
  • ArgoCD 1.5.2

Handle Argo CD Component Upgrades

The operator should support the ability to upgrade one or more of the components that make up an Argo CD cluster. For example, upgrade the version of Redis server. Currently, the version can be set on the ArgoCD CR but the operator does not support changing this value after the cluster has been deployed. Ideally, the operator would notice the version change and update the component.
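For example, the desired behavior would be that editing the version in the CR triggers a rollout of the affected component (values below are illustrative):

```yaml
spec:
  redis:
    image: redis
    version: 6-alpine   # changed from 5-alpine; operator should reconcile the Deployment
```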

initialSSHKnownHosts overrides default SSHKnownHosts provided by ArgoCD

I believe this is currently working as designed, but I'd like to see if anyone has similar thoughts. Right now upstream ArgoCD provides the initial SSHKnownHosts for github, bitbucket, gitlab, etc. when you install the system. The current functionality of this operator overrides/removes them if you provide your own initialSSHKnownHosts to the operator. What I would like to see is something along the lines of:

  • inject the default hosts as the upstream does
  • append my custom list to these default hosts so that I don't have to "carry" along the default ones myself
  • provide a key that does allow users to remove those default hosts in the case that they don't want them included

I believe this would look something like:

initialSSHKnownHosts:
  default-hosts: false
  keys: |
    bitbucket.org ssh-rsa AAAAB3NzaC...
    github.com ssh-rsa AAAAB3NzaC...

In this case, the "load" is much lighter for folks who want to opt out versus those who want to also include the default hosts (i.e. a single key versus numerous known-hosts entries).

argocd-operator Service Account Missing RBAC for v1.HorizontalPodAutoscaler when installing via OLM

The argocd-operator Service Account is missing RBAC for v1.HorizontalPodAutoscaler when installing via the Operator Lifecycle Manager (OLM). An initial install works (I believe), but subsequent reconciles are blocked by the error. I think this has been added on master https://github.com/argoproj-labs/argocd-operator/blob/master/deploy/role.yaml#L50 , and it's just a version skew with what's deployed for OLM.

I followed the directions from the docs site for installing the argocd operator, as well as OLM (0.14.1).

You can add a new role or patch an existing role bound to the SA to resolve the issue.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-operator-hack
rules:
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-operator-hack
subjects:
  - kind: ServiceAccount
    name: argocd-operator
    namespace: argocd  # namespace where the operator's ServiceAccount lives
roleRef:
  kind: Role
  name: argocd-operator-hack
  apiGroup: rbac.authorization.k8s.io

Here is the log message from k logs -f argocd-operator-cbd8b45cf-wxstt -n argocd:

E0217 23:54:22.476979 1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.HorizontalPodAutoscaler: horizontalpodautoscalers.autoscaling is forbidden: User "system:serviceaccount:argocd:argocd-operator" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "argocd"

And once you apply the hack for the service account

1 reflector.go:125] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:98: Failed to list *v1.HorizontalPodAutoscaler: horizontalpodautoscalers.autoscaling is forbidden: User "system:serviceaccount:argocd:argocd-operator" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "argocd"
{"level":"info","ts":1581985470.2491558,"logger":"controller_argocd","msg":"reconciling ingresses"}
{"level":"info","ts":1581985470.2693412,"logger":"controller_argocd","msg":"Reconciling ArgoCD","namespace":"argocd","name":"example-argocd"}
{"level":"info","ts":1581985470.2694004,"logger":"controller_argocd","msg":"reconciling service accounts"}
{"level":"info","ts":1581985470.2694175,"logger":"controller_argocd","msg":"reconciling certificate authority"}
{"level":"info","ts":1581985470.2699878,"logger":"controller_argocd","msg":"reconciling CA secret"}
{"level":"info","ts":1581985470.2700505,"logger":"controller_argocd","msg":"reconciling CA config map"}
{"level":"info","ts":1581985470.2704508,"logger":"controller_argocd","msg":"reconciling secrets"}

System Info:
minikube version: v1.7.2
commit: 50d543b5fcb0e1c0d7c27b1398a9a9790df09dfb

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Cannot add repo with argocd repo add command

When I run the following command to add a repo

argocd repo add https://github.com/christianh814/gitops-examples

It does not actually add it. It seems like it's getting overwritten by the operator.

When I add a repo via the operator...

apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: example-argocd
spec:
  repositories: |
    - url: https://github.com/christianh814/another-repo

It'll add the repo...but I still can't add additional repos via the CLI.

I would like to be able to add additional repos, not only ones managed by the operator.
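Until that is possible, the workaround appears to be keeping every repo in the CR itself (both URLs taken from above):

```yaml
spec:
  repositories: |
    - url: https://github.com/christianh814/another-repo
    - url: https://github.com/christianh814/gitops-examples
```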

ArgoCD CR Should List Status for All Argo CD Components

Currently, the operator only populates a Status value for the Argo CD server component. In addition, the value for the Status Phase is computed based solely on the server component's Status.

The operator should list the status for each of the following Argo CD components and use each of these status values in the final determination for the Status Phase value.

  • Application Controller
  • Dex
  • Redis
  • Repo Server
  • Server

Document Kustomize Deployment

We added kustomize to the options for deploying this operator but need to go back and update the documentation to reflect that as well. This is a placeholder to remind me to get that updated.

OpenShift OAuth Broken After Cluster Restart

It appears that the Dex pod must be deleted/recreated when restarting the entire OpenShift cluster in order for OAuth to function correctly after the restart. Need to investigate if there is something that the operator can do to help prevent the need to recreate the Dex pod in this scenario.

Ability to utilize repository.credentials in initialRepositories

Right now, when you add a repository via initialRepositories, we have to specify credentials or keys for each individual repository that we would like to use. As of Argo CD v1.4.x, we have the ability to specify a key to be used at the root of a repo (i.e. org/project level) with repository.credentials. Not sure what the design of this would look like at this point, but I think it would be beneficial to be able to support both.

Happy to help add functionality here, but wanted to get others input here first.
