
JFrog Helm Charts

This repository hosts the official JFrog Helm Charts for deploying JFrog products to Kubernetes.

For older versions, please refer to https://github.com/jfrog/charts/tree/pre-unified-platform

Install Helm (only V3 is supported)

Get the latest Helm release.

Install Charts

Add JFrog Helm repository

Before installing JFrog Helm charts, you need to add the JFrog Helm repository to your Helm client.

helm repo add jfrog https://charts.jfrog.io
helm repo update

Note: For instructions on installing a specific chart, follow its README.md.

Contributing to JFrog Charts

Fork the repo and make your changes. Please run make lint to lint the charts locally, and at least install the chart to verify it works. :)

On success, open a pull request (PR) against the master branch.

We will take the PR changes internally, review, and test them.

Upon successful review, someone will give the PR a LGTM (looks good to me) in the review thread.

We will include the PR changes in upcoming releases and credit the contributor with a PR link in the changelog (and also close the PR raised by the contributor).

Linting charts locally

Note: Docker must be running on your Mac/Linux machine. The command will only lint changed charts.

To lint all charts:

make lint

Forcing lint of unchanged charts

Note: Chart version bump check will be ignored.

You can force linting of one chart with the --charts flag:

make lint -- --charts stable/artifactory

You can force linting of a list of charts (separated by commas) with the --charts flag:

make lint -- --charts stable/artifactory,stable/xray

You can force linting of all charts with the --all flag:

make lint -- --all

Manually testing charts with Docker for Mac Kubernetes Cluster

Note: Make sure 'Show system containers (advanced)' is enabled in Preferences/Kubernetes.

On a Mac, you can install and test all changed charts in Docker for Mac:

make mac

Forcing installation of unchanged charts

You can force installation of one chart with the --charts flag:

make mac -- --charts stable/artifactory

You can force installation of a list of charts (separated by commas) with the --charts flag:

make mac -- --charts stable/artifactory,stable/xray

You can force installation of all charts with the --all flag:

make mac -- --all

Note: It might take a while to run the install test for all charts in Docker for Mac.

Manually testing charts with remote GKE cluster

You can install and test changed charts against the GKE cluster set in your kubeconfig context:

make gke

Forcing installation of unchanged charts

You can force installation of one chart with the --charts flag:

make gke -- --charts stable/artifactory

You can force installation of a list of charts (separated by commas) with the --charts flag:

make gke -- --charts stable/artifactory,stable/xray

You can force installation of all charts with the --all flag:

make gke -- --all

Using a dedicated GKE cluster for manual chart testing

By default, the GKE cluster set in the kubeconfig context is used. You can specify a dedicated cluster (it must be set in the kubeconfig) in a file named CLUSTER:

GKE_CLUSTER=gke_my_cluster_context_name

Store the CLUSTER file in the root folder of the repo; it is ignored by git.

With this setup, your local default cluster can be different from the one used for chart testing.

Examples

For more detailed examples of each chart's values, please refer to the examples.

Docs

For more information on using Helm, refer to the Helm documentation.

To get a quick introduction to charts, see the Helm chart documentation.


Issues

Adding nodeSelector to distribution and xray charts

Is this a request for help?: yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature Request

Version of Helm and Kubernetes: Helm 2.10.0 K8s 1.10

Which chart: Distribution and xray

What happened: Want a node selector to specify the nodes in my env

What you expected to happen: I want the pod to come up on a specific node

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
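
As a sketch of what the requested option could look like in values.yaml for the Distribution (and, analogously, Xray) chart; the top-level key and the node label are illustrative, not the charts' actual structure:

# Illustrative only -- exact keys depend on how the charts expose pod scheduling options
distribution:
  nodeSelector:
    disktype: ssd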

Distribution does not come up with default distribution chart due to invalid redis password

2018-09-28 18:01:15,926 [main] [WARN ] (c.j.b.d.c.DistributorApplicationContext:551) Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'foremanDistributorService': Invocation of init method failed; nested exception is redis.clients.jedis.exceptions.JedisDataException: ERR invalid password
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'foremanDistributorService': Invocation of init method failed; nested exception is redis.clients.jedis.exceptions.JedisDataException: ERR invalid password
at org.springframework.beans.factory.annotation.InitDestroyAnnotationBeanPostProcessor.postProcessBeforeInitialization(InitDestroyAnnotationBeanPostProcessor.java:137)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsBeforeInitialization(AbstractAutowireCapableBeanFactory.java:409)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1620)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
at com.jfrog.bintray.distribution.context.DistributorApplicationContext.create(DistributorApplicationContext.java:39)
at com.jfrog.bintray.distributor.DistributorRunner.(DistributorRunner.java:30)
at com.jfrog.bintray.distributor.DistributorRunner.main(DistributorRunner.java:36)
Caused by: redis.clients.jedis.exceptions.JedisDataException: ERR invalid password

The password for Redis is randomly generated, and the distribution service does not correctly configure itself to use the randomly generated password.

Artifactory nodeSelector, affinity and tolerations seem to be indented wrongly

Is this a request for help?:
No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
Helm
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

Kubernetes
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:34:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
Artifactory

What happened:
Error: release artifactory-v6 failed: StatefulSet in version "v1beta2" cannot be handled as a StatefulSet: v1beta2.StatefulSet.Spec: v1beta2.StatefulSetSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.NodeSelector: ReadString: expects " or n, but found {, error found in #10 byte of ...|ssProbe":{"failureTh|..., bigger context ...|:"wait-for-db"}],"nodeSelector":{"livenessProbe":{"failureThreshold":10,"httpGet":{"path":"/artifact|...

What you expected to happen:
The installation would be successful and the nodeSelector, affinity or tolerations are set

How to reproduce it (as minimally and precisely as possible):
Try to install the artifactory chart while specifying a nodeSelector, affinity or tolerations

values.yaml:

artifactory:
  nodeSelector:
    type: "artifactory"

Anything else we need to know:
Compare the indentation used for these blocks in the Artifactory statefulset template with, for example:

{{ toYaml .Values.distribution.resources | indent 10 }}

or

{{ toYaml .Values.distributor.resources | indent 10 }}
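
For comparison, a commonly used pattern for rendering these optional blocks inside a pod spec is sketched below; the exact indent level depends on where the block sits in the statefulset template, so the indent 8 here is illustrative:

      {{- if .Values.artifactory.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.artifactory.nodeSelector | indent 8 }}
      {{- end }}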

artifactory-ha nodes not getting license

Version of Helm and Kubernetes:

Helm

$ helm version
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Kubernetes 1.10

Which chart:

artifactory-ha

What happened:

Running the HA chart and getting the following when I try to open artifactory on my browser.

primary log

2018-12-19 06:11:30,247 [localhost-startStop-1] [JFrog-Access] [INFO ] (o.j.a.AccessApplication:597) - The following profiles are active: production,grpc
06:11:31.138 [localhost-startStop-2] DEBUG org.artifactory.addon.ConverterBlockerImpl - License status is: valid
06:11:31.138 [localhost-startStop-2] DEBUG org.artifactory.addon.ConverterBlockerImpl - Found a valid license, allowed to perform conversion

Node log

2018-12-19 06:34:34,609 [http-nio-8081-exec-8] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:34,609 [http-nio-8081-exec-4] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:34,609 [http-nio-8081-exec-6] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:34,721 [http-nio-8081-exec-10] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,004 [http-nio-8081-exec-1] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,026 [http-nio-8081-exec-5] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,026 [http-nio-8081-exec-7] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,027 [http-nio-8081-exec-2] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,033 [http-nio-8081-exec-9] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,035 [http-nio-8081-exec-3] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,175 [http-nio-8081-exec-8] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,175 [http-nio-8081-exec-6] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,345 [http-nio-8081-exec-4] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed
2018-12-19 06:34:35,349 [http-nio-8081-exec-10] [WARN ] (o.a.r.f.LicenseRestFilter:59) - License is not installed

When I install the license by running Artifactory Pro in a normal Docker container on my laptop, I get:

High Availability license is installed but HA feature is not configured.
Visit Artifactory High Availability Installation and Setup page in JFrog's wiki for detailed instructions.

So it seems the license is correct for HA.

What you expected to happen:

Be able to see artifactory

[Feature] Provide for extra backup volume

We are currently using S3 to host our artifacts but would like the ability to back up to a separate backup NFS volume. Would the chart be able to provide for adding that volume?

Artifactory "Distribution Cert for Artifactory" fails Artifactory startup

BUG REPORT

Version of Helm and Kubernetes:
Any

Which chart:
artifactory and artifactory-ha

What happened:
When following instructions to create distribution certs and passing them to Artifactory, exceptions are thrown by Access that prevent Artifactory from starting up.

What you expected to happen:
Access and Artifactory start up with the provided private key and certificate.

How to reproduce it (as minimally and precisely as possible):
Follow instructions in https://github.com/jfrog/charts/tree/master/stable/artifactory#create-distribution-cert-for-artifactory-edge

Anything else we need to know:
The errors are related to the fact that the certificate is missing some metadata that Access is expecting to find. The certificate generation process needs to be updated.

Using external DB other than Postgres does not work

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report

Version of Helm and Kubernetes: helm: 2.11.0, k8s-client: 1.12, k8s-server: 1.10.3

Which chart: Artifactory (pro/oss)

What happened: When using external DBs, the README tells you to curl the plugin for the DB connection (such as the MySQL connector JAR) as part of the postStart lifecycle. postStart is not guaranteed to run before the entrypoint command. This causes the Artifactory container to crash because the entrypoint script exits if the plugin is not found. The pod never becomes healthy.

What you expected to happen: The pod should come up healthy using the external db.

How to reproduce it (as minimally and precisely as possible): Basically just try this:

...
--set postgresql.enabled=false \
--set artifactory.postStartCommand="curl -L -o /opt/jfrog/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://jcenter.bintray.com/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar && chown 1030:1030 /opt/jfrog/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar" \
--set database.type=mysql \
--set database.host=${DB_HOST} \
--set database.port=${DB_PORT} \
--set database.user=${DB_USER} \
--set database.password=${DB_PASSWORD} \

Anything else we need to know: My thoughts on how to fix this are to either add a few retries to the plugin check in the container's entrypoint script, or start shipping all of the plugins in the container (not ideal).
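
Another possible workaround, sketched below, is to fetch the connector JAR in an init container that shares a volume with the Artifactory container, so the file exists before the entrypoint runs. Container name, image, and mount paths are illustrative and not part of the chart:

# Illustrative pod-spec fragment only
initContainers:
  - name: download-jdbc-driver
    image: alpine:3.8
    command:
      - sh
      - -c
      - >
        wget -O /jdbc/mysql-connector-java-5.1.41.jar
        https://jcenter.bintray.com/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar
    volumeMounts:
      - name: jdbc-drivers
        mountPath: /jdbc
volumes:
  - name: jdbc-drivers
    emptyDir: {}
# The artifactory container would then mount the same volume (or copy the JAR)
# into /opt/jfrog/artifactory/tomcat/lib before Tomcat starts.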

Fail when using existing claim for Artifactory

Is this a request for help?: no


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes: Helm v2.8.2 - Kubernetes v1.10.1

Which chart: 7.4.2

What happened: Failed when using existing volume claim

What you expected to happen: It worked and used an existing claim

How to reproduce it (as minimally and precisely as possible): set existingClaim to a value

Anything else we need to know:
The problem is in the /templates/artifactory-statefulset.yaml file. The accessModes and resources.requests.storage is not used when an existing claim is used, and kubernetes will fail the deployment. A volumeClaimTemplates needs these fields.

Instead, the template part should look like this:

  volumeClaimTemplates:
  - metadata:
      name: artifactory-volume
    spec:
    {{- if .Values.artifactory.persistence.existingClaim }}
      selector:
        matchLabels:
          app: {{ template "artifactory.name" . }}
    {{- else }}
      {{- if .Values.artifactory.persistence.storageClass }}
      {{- if (eq "-" .Values.artifactory.persistence.storageClass) }}
      storageClassName: ""
      {{- else }}
      storageClassName: "{{ .Values.artifactory.persistence.storageClass }}"
      {{- end }}
      {{- end }}
    {{- end }}
      accessModes: [ "{{ .Values.artifactory.persistence.accessMode }}" ]
      resources:
        requests:
          storage: {{ .Values.artifactory.persistence.size }}
      {{- else }}
      - name: artifactory-volume
        emptyDir: {}
      {{- end }}

So that whether or not an existing claim is specified, accessModes and the storage resource request are filled out.

Errors I'm getting before changing the template:

Statefulset failing:
  Warning  FailedCreate  2m (x12 over 2m)  statefulset-controller  create Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory failed error: Failed to create PVC artifactory-volume-artifactory-artifactory-0: PersistentVolumeClaim "artifactory-volume-artifactory-artifactory-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]
  Warning  FailedCreate  2m (x13 over 2m)  statefulset-controller  create Claim artifactory-volume-artifactory-artifactory-0 for Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory failed error: PersistentVolumeClaim "artifactory-volume-artifactory-artifactory-0" is invalid: [spec.accessModes: Required value: at least 1 access mode is required, spec.resources[storage]: Required value]

  Warning  FailedCreate  1s (x11 over 7s)  statefulset-controller  create Claim artifactory-volume-artifactory-artifactory-0 for Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory failed error: PersistentVolumeClaim "artifactory-volume-artifactory-artifactory-0" is invalid: spec.resources[storage]: Required value
  Warning  FailedCreate  1s (x11 over 7s)  statefulset-controller  create Pod artifactory-artifactory-0 in StatefulSet artifactory-artifactory failed error: Failed to create PVC artifactory-volume-artifactory-artifactory-0: PersistentVolumeClaim "artifactory-volume-artifactory-artifactory-0" is invalid: spec.resources[storage]: Required value

nginx annotations helm set syntax

The only way I could set the ingress class was to run this:

--set ingress.annotations.'kubernetes\.io/ingress\.class'=bla \
warning: destination for annotations is a table. Ignoring non-table value <nil>

Perhaps it's better to do

--set ingress.annotations[0].key="kubernets.io/ingress.class" \
--set ingress.annotations[0].value="bla" \

(helm 1.9)
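
As an alternative to escaping dots in --set, the same annotation can usually be supplied through a values file; a sketch, assuming the chart renders ingress.annotations as a plain map (the class name is an example):

ingress:
  annotations:
    kubernetes.io/ingress.class: nginx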

Artifactory HA with S3 has a hardcoded endpoint

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG

The artifactory-ha helm chart has a hardcoded S3 endpoint (s3.amazonaws.com). This makes the chart unusable for regions other than us-east-1.

Version of Helm and Kubernetes:
NA

Which chart:
artifactory-ha

What happened:
Installing on EKS in the us-west-2 region with S3 as the filestore, Artifactory failed to access S3 with a 400 error.
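
Assuming the endpoint were exposed through values (as in the artifactory-ha binarystore template quoted in a later issue), a region-specific override might look like this; the bucket name is illustrative:

artifactory:
  persistence:
    awsS3:
      endpoint: s3.us-west-2.amazonaws.com
      region: us-west-2
      bucketName: my-artifactory-bucket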

On Canonical Kubernetes, the nginx pod is in CrashLoopBackOff with restarts

kubectl get pod

NAME READY STATUS RESTARTS AGE
art-ha-artifactory-ha-member-0 1/1 Running 0 27m
art-ha-artifactory-ha-member-1 1/1 Running 0 24m
art-ha-artifactory-ha-primary-0 1/1 Running 0 27m
art-ha-nginx-855576d94f-sph2m 0/1 CrashLoopBackOff 9 27m
art-ha-postgresql-85d5885bcb-swpvx 1/1 Running 0 27m
default-http-backend-x2z7k 1/1 Running 0 1h
nginx-ingress-kubernetes-worker-controller-4mhlp 1/1 Running 0 1h
nginx-ingress-kubernetes-worker-controller-qvpxs 1/1 Running 0 1h
nginx-ingress-kubernetes-worker-controller-tfk4q 1/1 Running 0 1h

kubectl logs art-ha-nginx-855576d94f-sph2m

2018-08-28 00:55:41 [139 entrypoint-nginx.sh] Preparing to run Nginx in Docker
2018-08-28 00:55:41 [11 entrypoint-nginx.sh] Dockerfile for this image can found inside the container.
2018-08-28 00:55:41 [12 entrypoint-nginx.sh] To view the Dockerfile: 'cat /docker/nginx-artifactory-pro/Dockerfile.nginx'.
2018-08-28 00:55:41 [69 entrypoint-nginx.sh] Setting up directories if missing
mkdir: cannot create directory '/var/opt/jfrog/nginx/conf.d': Permission denied
2018-08-28 00:55:41 [31 31 functions.sh] ERROR: Failed creating /var/opt/jfrog/nginx/conf.d

wrong S3 binarystore configuration in artifactory-ha

Version of Helm and Kubernetes:
NA

Which chart:
artifactory-ha

What happened:
The file artifactory-binarystore.yaml has an incorrect hardcoded S3 provider configuration.

2018-08-22 15:17:35,527 [eventual-cluster-worker-0] [ERROR] (o.j.s.b.p.RetryBinaryProvider:126) - Failed to check if blob 'ee0b58a1b6b0f5993e1ea6e155432e0152ba689a' exist in next binary provider
java.lang.RuntimeException: Failed to check if blob  '{sha1='ee0b58a1b6b0f5993e1ea6e155432e0152ba689a', sha2='null', md5='null', length=-1, headers={}}'  exist in s3: null - null

What you expected to happen:
When setting an S3 backend for the filestore, Artifactory should be able to connect to S3 to upload and retrieve files.
With the current configuration, Artifactory is unable to connect to S3.

How to reproduce it (as minimally and precisely as possible):
Simply configure S3 as described in the documentation.

Anything else we need to know:

Add an option to provide a secret name for the root certificate

Is this a request for help?:

NO

Is this a BUG REPORT or FEATURE REQUEST?:

FEATURE REQUEST

Version of Helm and Kubernetes:
all versions

Which chart:
artifactory-ha and artifactory

I would like to have the option to provide the root certificate for Artifactory as a Kubernetes secret. This would be very helpful when configuring E+ with regard to access federation and the circle of trust.

According to this guide: https://www.jfrog.com/confluence/display/MC/Managing+Access+Federation#ManagingAccessFederation-EstablishingtheCircleofTrust
I need to have the root certificate file on each edge node in order to configure the circle of trust. If I were able to provide it as a parameter/secret and reference its name, it would be much easier.

We can't update a running Artifactory server's expired license by updating the license secret

BUG REPORT

Which chart:
artifactory-ha

We mention in the documentation (in the Install Artifactory HA license):

IMPORTANT: You should use only one of the following methods. Switching between them while a cluster is running might disable your Artifactory HA cluster!

However, if we have installed Artifactory while providing a secret for the license and the license has expired, with the current implementation we can't update it by updating the license secret; we must update it via the UI (or REST API).

Xray chart readme - Missing escaping '\' in the mongodb connection string example

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:
helm v2.11.0

Which chart: Xray

What happened: When using the mongodb connection string given in the readme example, the Xray server fails to start due to a wrong replacement of the connection string in the application config file.
output: mongodb://xray:[email protected]:27017/?authSource=xrayMONGODB_URL&authMechanism=SCRAM-SHA-1

What you expected to happen:
output: mongodb://xray:[email protected]:27017/?authSource=xray&authMechanism=SCRAM-SHA-1

How to reproduce it (as minimally and precisely as possible):
follow the example for using external mongodb

Anything else we need to know:
Fix: escape the '&' in the connection string.
mongodb://xray:[email protected]:27017/?authSource=xray\&authMechanism=SCRAM-SHA-1

Nginx container starts after second node deployment during artifactory-ha helm chart deployment

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
0.4.0

Which chart:
artifactory-ha

What happened:
After deploying the chart, the Nginx container starts up and the container's lifecycle postStart event handler is kicked off but then the container is killed off and a new one is created. Please note the following from the log below:

"Killing container with id docker://nginx:FailedPostStartHook"

Normal Pulling 6m kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc pulling image "docker.bintray.io/jfrog/nginx-artifactory-pro:6.2.0"
Normal Pulled 4m kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Successfully pulled image "docker.bintray.io/jfrog/nginx-artifactory-pro:6.2.0"
Warning FailedPostStartHook 4m kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Exec lifecycle hook ([/bin/sh -c if [ -f /tmp/replicator-nginx.conf ]; then cp -fv /tmp/replicator-nginx.conf /etc/nginx/conf.d/replicator-nginx.conf; fi; if [ -f /tmp/ssl/.crt ]; then rm -rf /var/opt/jfrog/nginx/ssl/example.; cp -fv /tmp/ssl/* /var/opt/jfrog/nginx/ssl; fi; until [ -f /etc/nginx/conf.d/artifactory.conf ]; do sleep 1; done; if ! grep -q 'upstream' /etc/nginx/conf.d/artifactory.conf; then sed -i -e 's,proxy_pass .,proxy_pass http://art-artifactory-ha:8081/artifactory/;,g'
-e 's,server_name .
,server_name ~(?.+)\.art-artifactory-ha art-artifactory-ha;,g'
/etc/nginx/conf.d/artifactory.conf;
fi; if ! grep -q 'proxy_http_version' /etc/nginx/conf.d/artifactory.conf; then sed -i 's,(proxy_next_upstream .),proxy_http_version 1.1;\n \1,g' /etc/nginx/conf.d/artifactory.conf; fi; sleep 5; nginx -s reload; touch /var/log/nginx/conf.done
]) for Container "nginx" in Pod "art-nginx-7f94bb7698-lhb5h_osh-infra(bc9c2801-bb5a-11e8-9ab6-fa163ec90eef)" failed - error: command '/bin/sh -c if [ -f /tmp/replicator-nginx.conf ]; then cp -fv /tmp/replicator-nginx.conf /etc/nginx/conf.d/replicator-nginx.conf; fi; if [ -f /tmp/ssl/
.crt ]; then rm -rf /var/opt/jfrog/nginx/ssl/example.; cp -fv /tmp/ssl/ /var/opt/jfrog/nginx/ssl; fi; until [ -f /etc/nginx/conf.d/artifactory.conf ]; do sleep 1; done; if ! grep -q 'upstream' /etc/nginx/conf.d/artifactory.conf; then sed -i -e 's,proxy_pass .,proxy_pass http://art-artifactory-ha:8081/artifactory/;,g'
-e 's,server_name .
,server_name ~(?.+)\.art-artifactory-ha art-artifactory-ha;,g'
/etc/nginx/conf.d/artifactory.conf;
fi; if ! grep -q 'proxy_http_version' /etc/nginx/conf.d/artifactory.conf; then sed -i 's,(proxy_next_upstream .*),proxy_http_version 1.1;\n \1,g' /etc/nginx/conf.d/artifactory.conf; fi; sleep 5; nginx -s reload; touch /var/log/nginx/conf.done
' exited with 137: , message: ""
Normal Created 4m (x2 over 4m) kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Created container
Normal Killing 4m kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Killing container with id docker://nginx:FailedPostStartHook
Normal Pulled 4m kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Container image "docker.bintray.io/jfrog/nginx-artifactory-pro:6.2.0" already present on machine
Normal Started 4m (x2 over 4m) kubelet, artifactory-ha-1-openstack-helm-server-u4publsxw2wc Started container

What you expected to happen:
The Nginx container and the postStart event should run successfully, and the container should not be restarted. Otherwise, the Nginx configMap from nginx-replicator-conf.yaml may not be used.

How to reproduce it (as minimally and precisely as possible):
The issue is intermittent and looks to be some sort of race condition. When it does happen, the Nginx container starts up after the second Artifactory node (artifactory-ha-artifactory-ha-member-0) and before the third (artifactory-ha-artifactory-ha-member-1). Then a restart happens on the Nginx container which is causing a longer than expected deployment.

kubectl get pods (ran after second node deployment)
NAME READY STATUS RESTARTS AGE
artifactory-ha-artifactory-ha-member-0 1/1 Running 0 4m
artifactory-ha-artifactory-ha-member-1 0/1 Running 0 1m
artifactory-ha-artifactory-ha-primary-0 1/1 Running 0 4m
artifactory-ha-nginx-57df8c7d66-vdjcv 1/1 Running 1 4m
artifactory-ha-postgresql-9d4dc6bc7-rm8rc 1/1 Running 0 4m

kubectl describe pods artifactory-ha-nginx-57df8c7d66-vdjcv
Normal Killing 2m kubelet, gke-marcb-cluster-default-pool-2ab0ef6f-559k Killing container with id docker://nginx:FailedPostStartHook
Normal Created 2m (x2 over 2m) kubelet, gke-marcb-cluster-default-pool-2ab0ef6f-559k Created container
Normal Started 2m (x2 over 2m) kubelet, gke-marcb-cluster-default-pool-2ab0ef6f-559k Started container
Normal Pulled 2m kubelet, gke-marcb-cluster-default-pool-2ab0ef6f-559k Container image "docker.bintray.io/jfrog/nginx-artifactory-pro:6.3.2" already present on machine

Anything else we need to know:

Use secret for external MongoDB

Is this a request for help?:
FEATURE REQUEST

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

Version of Helm and Kubernetes:
Helm: 2.9.1 K8s: 1.10.0

Which chart:
Xray

What happened:
MONGODB_URL is set using a plain variable instead of a secret.

What you expected to happen:
Use secret for MONGODB_URL

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
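
A minimal sketch of what the requested change could look like in the xray-server container spec, referencing a Kubernetes secret instead of a plain variable; the secret name and key are illustrative:

env:
  - name: MONGODB_URL
    valueFrom:
      secretKeyRef:
        name: xray-mongodb   # illustrative secret name
        key: mongodb-url     # illustrative key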

[Proposal] Provide different values for small (default), medium & large deployments

What
Ship 3 flavors of values.yaml so that a user (dev, S/M/L customer) can pick the flavor of the file that is most relevant and is only exposed to the minimal set of properties they have to modify, versus the entire set. There is no need for the user to touch the main values.yaml file; instead they should only modify the relevant file that includes the minimal set, and then pass that modular file via the -f custom-values-file option.

Why
Having a single values.yaml is too much, especially for customers who are new to Helm. Relevant details such as compute and secret names are missed because they are hidden among hundreds of key-value pairs. Plus, these values are not optimized for their use case. Example: for a customer that runs 300 concurrent builds with a 1GB payload, the default compute values are not enough. Or, in the case of Xray, the memory assigned to MongoDB should be at least 6GB for staging and maybe 12GB for production. This would enhance the UX for our customers and even reduce the README size.
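
To illustrate the proposal, a "large" flavor file passed via -f could be as small as the sketch below; the keys follow the usual resources layout, but the sizes are examples, not recommendations:

# values-large.yaml (illustrative)
artifactory:
  resources:
    requests:
      cpu: "4"
      memory: 8Gi
    limits:
      cpu: "8"
      memory: 16Gi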

Optionally include primary node into poddisruptionbudget

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Which chart: artifactory-ha

When the primary node is added to the member service (with "pool: all" instead of "pool: members" in values.yaml), it should also be added to the poddisruptionbudget, to prevent a situation where there are only two nodes (primary and member-0) and both are deleted at the same time during an eviction or similar operation.
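
A sketch of what such a PodDisruptionBudget could look like if its selector matched both primary and member pods; the label keys and values are illustrative, as the chart's actual labels may differ:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: artifactory-ha
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: artifactory-ha   # illustrative; no role label, so primary and member pods are both covered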

Feature Request: Azure Blob Storage

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature Request

Just like we are able to use S3 and GCS, it would be nice to have Azure Blob Storage support.
The chart should accept Azure Blob Storage credentials, such as the account name and the access key.

Artifactory HA - Harcoded parameters in S3 provider

BUG REPORT

Which chart:
artifactory-ha

The identity and credential parameters are always injected by the template, even when they are empty. This causes Artifactory to fail on startup when the S3 provider is set to connect via an IAM role.

https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/templates/artifactory-binarystore.yaml

    <provider id="s3" type="s3">
            <endpoint>{{ .Values.artifactory.persistence.awsS3.endpoint }}</endpoint>
            <refreshCredentials>{{ .Values.artifactory.persistence.awsS3.refreshCredentials }}</refreshCredentials>
            <testConnection>{{ .Values.artifactory.persistence.awsS3.testConnection }}</testConnection>
            <httpsOnly>true</httpsOnly>
            <region>{{ .Values.artifactory.persistence.awsS3.region }}</region>
            <bucketName>{{ .Values.artifactory.persistence.awsS3.bucketName }}</bucketName>
            <identity>{{ .Values.artifactory.persistence.awsS3.identity }}</identity> <!-- Optional -->
            <credential>{{ .Values.artifactory.persistence.awsS3.credential }}</credential> <!-- Optional -->
            <path>{{ .Values.artifactory.persistence.awsS3.path }}</path>
            {{- range $key, $value := .Values.artifactory.persistence.awsS3.properties }}
            <property name="{{ $key }}" value="{{ $value }}"/>
            {{- end }}
    </provider>
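
A sketch of one possible fix, rendering the optional parameters only when they are set (same values keys as above):

            {{- if .Values.artifactory.persistence.awsS3.identity }}
            <identity>{{ .Values.artifactory.persistence.awsS3.identity }}</identity>
            {{- end }}
            {{- if .Values.artifactory.persistence.awsS3.credential }}
            <credential>{{ .Values.artifactory.persistence.awsS3.credential }}</credential>
            {{- end }}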

helm search <reponame> gives nothing

Is this a request for help?: YES


How can I search the repo after deploying the Helm chart? I can see the chart in the JFrog UI but get nothing returned after running:

helm repo update
helm search <repo name>

I need that to confirm versions of charts, what's there, etc.

[Feature] Enable JMX monitoring

Would it be possible for the chart to expose the JMX monitoring port via a value option, for use with Prometheus's JMX exporter?

GKE ingress controller fails with an "unhealthy" backend because of the readiness probe path

The xray-server template has a readiness probe pointing at path "/":

       readinessProbe:
          httpGet:
            path: /
            port: {{ .Values.server.internalPort }}
          initialDelaySeconds: 60
          periodSeconds: 10
          failureThreshold: 10
        livenessProbe:
          httpGet:
            path: /
            port: {{ .Values.server.internalPort }}
          initialDelaySeconds: 90
          periodSeconds: 10

When installing Xray on GKE using its ingress controller, the health check will be created pointing to the root path, falling back to the readinessProbe path, which is still the root path.

This results in an "unhealthy" backend service: if you hit Xray from the worker node, you will see that it returns a 301 at the root path and is therefore considered unhealthy. Instead, the health check should be looking at the path "/api/v1/system/ping":

$ curl http://localhost:31469/api/v1/system/ping
{"status":"pong"}

$ curl http://localhost:31469/
<a href="/web">Moved Permanently</a>.

If we manually change the health check endpoint to "/api/v1/system/ping", the backend health check passes and the Xray server can be reached.

You may want to update the readiness probe in the template.
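
A sketch of the suggested change, pointing the probes at the ping endpoint shown above while keeping the existing fields:

        readinessProbe:
          httpGet:
            path: /api/v1/system/ping
            port: {{ .Values.server.internalPort }}
          initialDelaySeconds: 60
          periodSeconds: 10
          failureThreshold: 10
        livenessProbe:
          httpGet:
            path: /api/v1/system/ping
            port: {{ .Values.server.internalPort }}
          initialDelaySeconds: 90
          periodSeconds: 10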

Documentation: Upgrade steps

What
Discourage upgrading by updating the image tag instead of using a new chart version. Remove the comment for the image.
Document steps that encourage re-using values.

Why
We want to make sure that the upgrade steps are less error-prone.

Missing quote for ingress host names in all charts

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT
Version of Helm and Kubernetes:

Which chart:

All charts

What happened:

Charts are not compatible with ingress host names beginning with '*'; Helm complains about the syntax.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Ingress host names should be quoted like this:

  {{- range $host := .Values.ingress.hosts }}
  - host: {{ $host | quote }}

See how the ingress template is created by the "helm create" command.

Allow pod anti-affinity settings to include primary node

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Version of Helm and Kubernetes:

Which chart: artifactory-ha

Currently, pod anti-affinity applies only to member nodes, which may result in a member pod being scheduled onto the same host where the primary node is already running. In a case where we have only two nodes, primary and member, such a situation will cause complete unavailability of the Artifactory service. A solution could be to avoid limiting the poddisruptionbudget to node role "member", like it is already done with the service definition here:
https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/templates/artifactory-service.yaml#L20-L22
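
A sketch of a pod anti-affinity rule that does not restrict itself to the member role, so primary and member pods repel each other; the label key/value and topology key are illustrative:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: artifactory-ha   # illustrative; should match both primary and member pods
        topologyKey: kubernetes.io/hostname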

Directory /var/opt/jfrog/artifactory has bad permissions for user 'artifactory'

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
Kubernetes
1.10.8_1530
Helm client and server: v2.11.0

Which chart:
When installing using the instructions from https://github.com/jfrog/charts/tree/master/stable/artifactory, using the command
helm install --name artifactory --set artifactory.image.repository=docker.bintray.io/jfrog/artifactory-oss jfrog/artifactory

What happened:
Pod artifactory-artifactory-0 is in Waiting: CrashLoopBackOff

/var/opt/jfrog/artifactory is NOT writable!
2018-12-05 08:59:21 [115 entrypoint-artifactory.sh] Directory: /var/opt/jfrog/artifactory, permissions: 755, owner: nobody, group: UNKNOWN
2018-12-05 08:59:21 [116 entrypoint-artifactory.sh] Mounted directory must be writable by user 'artifactory' (id 1030)

What you expected to happen:
Pod to start

How to reproduce it (as minimally and precisely as possible):
Install artifactory with the instructions from https://github.com/jfrog/charts/tree/master/stable/artifactory, using the command
helm install --name artifactory --set artifactory.image.repository=docker.bintray.io/jfrog/artifactory-oss jfrog/artifactory

Anything else we need to know:
logs from the pod:
Logs:
Preparing to run Artifactory in Docker

2018-12-05 08:59:20 [44 entrypoint-artifactory.sh] Dockerfile for this image can found inside the container.
2018-12-05 08:59:20 [45 entrypoint-artifactory.sh] To view the Dockerfile: 'cat /docker/artifactory-oss/Dockerfile.artifactory'.
2018-12-05 08:59:20 [50 entrypoint-artifactory.sh] Checking open files and processes limits
2018-12-05 08:59:20 [53 entrypoint-artifactory.sh] Current max open files is 1048576
2018-12-05 08:59:20 [65 entrypoint-artifactory.sh] Current max open processes is unlimited
2018-12-05 08:59:20 [75 entrypoint-artifactory.sh] Checking if /var/opt/jfrog/artifactory is mounted
2018-12-05 08:59:20 [80 entrypoint-artifactory.sh] /var/opt/jfrog/artifactory is mounted
2018-12-05 08:59:20 [92 entrypoint-artifactory.sh] Testing directory (/opt/jfrog/artifactory) has read/write permissions
2018-12-05 08:59:21 [100 entrypoint-artifactory.sh] /opt/jfrog/artifactory has read/write permissions for artifactory
2018-12-05 08:59:21 [92 entrypoint-artifactory.sh] Testing directory (/var/opt/jfrog/artifactory) has read/write permissions
/entrypoint-artifactory.sh: line 97: /var/opt/jfrog/artifactory/test-permissions: Permission denied
2018-12-05 08:59:21 [113 entrypoint-artifactory.sh] ###########################################################
2018-12-05 08:59:21 [114 entrypoint-artifactory.sh] /var/opt/jfrog/artifactory is NOT writable!
2018-12-05 08:59:21 [115 entrypoint-artifactory.sh] Directory: /var/opt/jfrog/artifactory, permissions: 755, owner: nobody, group: UNKNOWN
2018-12-05 08:59:21 [116 entrypoint-artifactory.sh] Mounted directory must be writable by user 'artifactory' (id 1030)
2018-12-05 08:59:21 [117 entrypoint-artifactory.sh] ###########################################################
2018-12-05 08:59:21 [34 entrypoint-artifactory.sh] ERROR: Directory /var/opt/jfrog/artifactory has bad permissions for user 'artifactory'

Add parameters for Artifactory database name

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Allow using a name other than "artifactory" for the database name.

It seems that the Docker image can use a DB_NAME env parameter, but the chart is not using it.

Which chart:

Artifactory
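
A sketch of how the chart could wire this through; database.name is a hypothetical value, since the point of the request is that the chart does not expose it yet:

env:
  - name: DB_NAME
    value: "{{ .Values.database.name }}"   # hypothetical value key, not in the chart today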

Docker cannot connect to Repository Path repository

Is this a request for help?: No


Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:

Helm (but we are using Rancher to deploy Artifactory, not Helm directly):
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}

Kubernetes:
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Which chart: artifactory

What happened: Error when trying to docker login.

What you expected to happen: Artifactory can be used as a Docker repository in "Repository Path" mode without further (proxy) configuration.

How to reproduce it (as minimally and precisely as possible):

  1. Deploy Artifactory using official Helm chart with ingress.enabled = true
  2. Configure Docker repositories
  3. Use "Repository path" method
  4. Create a user to access
  5. Execute docker login
  6. Enter credentials

Anything else we need to know:

The problem only shows up when accessing Artifactory through Ingress / nginx. Then, we cannot log in to the Docker repository:

[rancher@intern-worker-lg3 ~]$ docker login artifactory.our.hostname
Username: docker
Password:
Error response from daemon: Get https://artifactory.our.hostname/v2/: unable to decode token response: EOF

The Artifactory request log also looks wrong, as the call to /v2/token shows up as /v2 only:

20181015205810|1|REQUEST|10.0.0.22|non_authenticated_user|GET|/api/docker/v2/|HTTP/1.1|401|0
20181015205810|1|REQUEST|10.0.0.22|non_authenticated_user|GET|/api/docker/v2/|HTTP/1.1|200|0

It does work when bypassing the Ingress/nginx reverse proxy by connecting to the K8s service directly:

[rancher@intern-worker-lg3 ~]$ docker login 10.43.245.110:8081
Username: docker
Password:
Login Succeeded

Artifactory request logs look better:

20181015205657|1|REQUEST|10.0.0.10|non_authenticated_user|GET|/api/docker/v2/|HTTP/1.1|401|0
20181015205657|237|REQUEST|10.0.0.22|k8sworker|GET|/api/docker/null/v2/token|HTTP/1.1|200|0
20181015205657|9|REQUEST|10.0.0.10|k8sworker|GET|/api/docker/v2/|HTTP/1.1|200|0

There is a workaround:

In a session with the JFrog support team, we tried to set some proxy headers in the Nginx proxy, which didn't help. We could solve it by adding a rewrite rule to the reverse proxy: rewrite ^/v2/token$ /artifactory/api/docker/null/v2/token;. This can be applied by setting the Helm chart value ingress.annotations.nginx\.ingress\.kubernetes\.io/configuration-snippet.
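
For reference, the workaround above expressed as chart values rather than an escaped --set flag could look roughly like this (assuming ingress.annotations is passed through to the Ingress resource):

ingress:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/v2/token$ /artifactory/api/docker/null/v2/token;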

OSS deployment fails due to failing Readiness Probe

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Version of Helm and Kubernetes: 2.11.0 and 1.9.7-gke.6

Which chart: jfrog/artifactory

What happened:

When attempting to pass in a custom Docker image to use Artifactory OSS, the readiness probe fails repeatedly due to a 404.

I initially launched this chart with the default settings, other than switching the Docker image to OSS:

helm install --set artifactory.image.repository="docker.bintray.io/jfrog/artifactory-oss" jfrog/artifactory

at which point I looked at the pod details and saw

Warning Unhealthy 3m17s (x2 over 3m26s) kubelet, gke-prod-cluster-default-pool Readiness probe failed: HTTP probe failed with statuscode: 404

Wanting to dive a little deeper before passing the issue over to you, I turned off the readiness probe, launched the service, and port-forwarded the pod's 8081 port to my localhost. I was able to hit the base port, at which point Artifactory attempts to forward my request to /artifactory, which in turn also results in a 404.

What you expected to happen: Readiness probe to succeed

How to reproduce it (as minimally and precisely as possible):

helm install --set artifactory.image.repository="docker.bintray.io/jfrog/artifactory-oss" jfrog/artifactory

Issue with subcharting Artifactory

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes: Kubernetes 1.10.7 and Helm 2.9.1

Which chart: Artifactory latest stable

What happened: Artifactory reports an invalid Postgres password and can't connect (404)

What you expected to happen: Everything to work like the regular, non-overridden install

How to reproduce it (as minimally and precisely as possible): Use the following values.yaml and call artifactory as a subchart. (I can attach the whole folder if wanted.)

Anything else we need to know: When checking the passwords on PostgreSQL and Artifactory, they are the same.

mission control command not found

Is this a request for help?:

No

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
helm - v2.11.0
kubernetes - 1.11.5-gke.4

Which chart:
mission-control

What happened:
The mission-control init containers have error messages that say "command not found"

What you expected to happen:
the errorExit function to be executed

How to reproduce it (as minimally and precisely as possible):
helm install jfrog/mission-control
look at the mission control pod logs

Anything else we need to know:

When I run helm install jfrog/mission-control, I see the following error:

(screenshot of the error attached in the original issue)

Insight Password verification failed

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Version of Helm and Kubernetes:
helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.6", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-08-16T21:33:47Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}

Which chart:
mission-control version 0.4.4

What happened:
The error below is encountered upon deployment of the mission-control Helm chart, even though generate_keys.sh was executed and the following commands were run to create and pass the secret to the deployment:
kubectl create secret generic mission-control-certs --from-file=./certs/insight-server/etc/security/insight.key --from-file=./certs/insight-server/etc/security/insight.crt --from-file=./certs/insight-server/etc/security/jfmc.crt --from-file=./certs/mission-control/etc/security/jfmc-truststore.jks-b64 --from-file=./certs/mission-control/etc/security/jfmc-keystore.jks-b64

helm install --name mission-control --set existingCertsSecret=mission-control-certs jfrog/mission-control

2018-10-04 21:46:54,596 [pool-6-thread-2] [ERROR] (o.j.m.s.i.InsightService:59) - Error occurred while refreshing Insight services.
org.jfrog.mc.service.exception.ServiceException: Error occured while creating the SSLContext: Keystore was tampered with, or password was incorrect
	at org.jfrog.mc.insight.InsightClient.createSslContext(InsightClient.java:213)
	at org.jfrog.mc.insight.InsightClient.createSslClient(InsightClient.java:135)
	at org.jfrog.mc.insight.InsightClient.doSslExecute(InsightClient.java:165)
	at org.jfrog.mc.insight.InsightClient.replaceServices(InsightClient.java:100)
	at org.jfrog.mc.service.insight.InsightService.replaceServices(InsightService.java:53)
	at org.jfrog.mc.service.RefreshInstanceDataServiceImpl.replaceAllInstancesOnInsight(RefreshInstanceDataServiceImpl.java:103)
	at org.jfrog.mc.service.RefreshInstanceDataServiceImpl.replaceAllInstancesOnInsight(RefreshInstanceDataServiceImpl.java:97)
	at org.jfrog.mc.service.RefreshInstanceDataServiceImpl$$FastClassBySpringCGLIB$$26fc11a2.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
	at org.jfrog.mc.service.RefreshInstanceDataServiceImpl$$EnhancerBySpringCGLIB$$87aa8adf.replaceAllInstancesOnInsight(<generated>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:333)
	at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207)
	at com.sun.proxy.$Proxy164.replaceAllInstancesOnInsight(Unknown Source)
	at org.jfrog.mc.service.AsyncRefreshInstanceDataService.replaceAllInstancesOnInsight(AsyncRefreshInstanceDataService.java:52)
	at org.jfrog.mc.service.AsyncRefreshInstanceDataService$$FastClassBySpringCGLIB$$7dfa1990.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
	at org.springframework.aop.framework.adapter.MethodBeforeAdviceInterceptor.invoke(MethodBeforeAdviceInterceptor.java:52)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:168)
	at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92)
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673)
	at org.jfrog.mc.service.AsyncRefreshInstanceDataService$$EnhancerBySpringCGLIB$$3ba341ea.replaceAllInstancesOnInsight(<generated>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.scheduling.support.ScheduledMethodRunnable.run(ScheduledMethodRunnable.java:65)
	at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
	at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
	at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
	at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
	at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
	at java.security.KeyStore.load(KeyStore.java:1445)
	at org.jfrog.mc.insight.InsightClient.createSslContext(InsightClient.java:196)
	... 45 common frames omitted
Caused by: java.security.UnrecoverableKeyException: Password verification failed
	at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:778)
	... 50 common frames omitted

What you expected to happen:
The keystore should have been set properly for Insight and the error should not occur.

How to reproduce it (as minimally and precisely as possible):

  1. Create GKE cluster
  2. Clone helm charts
  3. Execute generate_keys.sh
  4. Modify values.yaml with the minimum requirements
  5. Generate the secret
  6. Deploy mission-control helm chart with created secret
  7. The microservices will be deployed and running, but the Mission Control log will contain the error above.

Anything else we need to know:

[bug]: Shorten charts release name

The release names generated by the chart-testing chart_test.sh script are too long for statefulsets, coming to 65 characters for some metadata.labels (the limit is 63), so pods do not get created:

$ helm ls
artifactory-ha-7219pws7otxtcr6g
$ kubectl describe -n artifactory-ha-dmn0miwywb4tcegj statefulset
...
  Warning  FailedCreate      3m (x25 over 8m)  statefulset-controller  create Pod artifactory-ha-dmn0miwywb4tcegj-artifactory-ha-member-0 in StatefulSet artifactory-ha-dmn0miwywb4tcegj-artifactory-ha-member failed error: Pod "artifactory-ha-dmn0miwywb4tcegj-artifactory-ha-member-0" is invalid: metadata.labels: Invalid value: "artifactory-ha-dmn0miwywb4tcegj-artifactory-ha-member-545fd89668": must be no more than 63 characters
...

This is why it is failing on helm/charts#7320 (comment)

[Feature] Add NetworkPolicy to each chart to restrict traffic between pods

Is this a request for help?:

No


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST:
To have better control and a more secure traffic structure between each application's pods, every chart should also deploy a NetworkPolicy descriptor with the matching rules:

  • Deny all
  • Allow traffic from pods in same namespace on application ports
  • Allow traffic from external resources to public endpoint only

Version of Helm and Kubernetes:
Any

Which chart:
All
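
A minimal sketch of such a policy for a single chart, implementing default-deny plus same-namespace access on the application port; the pod label and port are illustrative, and a per-chart rule for the public endpoint would be added on top:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: artifactory-network-policy
spec:
  podSelector:
    matchLabels:
      app: artifactory        # illustrative label
  policyTypes:
    - Ingress                 # traffic not matched by a rule below is denied
  ingress:
    - from:
        - podSelector: {}     # pods in the same namespace
      ports:
        - protocol: TCP
          port: 8081          # illustrative application port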

nginx keeps loading continuously

Version of Helm and Kubernetes: helm version 2.11.0 Kubernetes 1.10

Which chart: Artifactory HA with external Postgres and NGinx LB.

What happened:
Artifactory is up and running on 3 nodes. The HA entries are added to the nginx config as:

upstream artifactory {
server 192.x.x.1:8081;
server 192.x.x.2:8081;
server 192.x.x.3:8081;
}

The nginx "updateConf.sh" is updating the artifactory.conf with this entry. And when it checks again the artifactory api returns the nginx config with same entries but different order like:
upstream artifactory {
server 192.x.x.2:8081;
server 192.x.x.3:8081;
server 192.x.x.1:8081;
}

This makes the command "local diffWithCurrentConf=$(diff -b ${NGINX_CONF} <(echo "$reverseProxyConf"))" return a difference, which causes the config to be loaded again. As the API keeps returning the values in a different order, nginx is stuck in a loop of continuously updating the configuration.

What you expected to happen: I would have expected it to recognize that only the line order changed, not the content of the file itself.

How to reproduce it (as minimally and precisely as possible):
Run a multi-node HA setup with external Postgres. Update the HTTP config once Artifactory is up and running.

Anything else we need to know:

Add license to this repository

Is this a request for help?:


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

Version of Helm and Kubernetes:
N/A

Which chart:
All

What happened:
Need a license in the repository.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
We need to agree on how we want to add a license. Should it be at the repo level or at the chart level?

Add support to run MongoDB without the root user

Is this a request for help?:

Feature request to allow MongoDB to run without root permissions. MongoDB chart version 4.0.0 has this support. Here is the link to that commit.


Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST

Version of Helm and Kubernetes:
Helm 2.9.1 K8s 1.10.0

Which chart:
Xray, Mission-Control

What happened:

What you expected to happen:
MongoDB should run without the root user.
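
A minimal sketch of what enabling this could look like, assuming the bundled MongoDB subchart exposes the upstream chart's securityContext values (the key names below are assumptions and may differ):

mongodb:
  securityContext:
    enabled: true
    runAsUser: 1001   # assumed non-root UID used by the MongoDB image
    fsGroup: 1001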

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Error when running multiple Nginx replicas (replicaCount > 1)

Version of Helm and Kubernetes:
NA

Which chart:
artifactory-ha

What happened:
When setting nginx.replicaCount=2 and nginx.persistence.enabled=true, the second Nginx instance fails to start because the volume claim is already bound to the first instance:

kubectl get pods | grep artifactory

artifactory-artifactory-ha-member-0                    1/1       Running    0          9m
artifactory-artifactory-ha-member-1                    1/1       Running    0          7m
artifactory-artifactory-ha-primary-0                   1/1       Running    0          9m
artifactory-nginx-69f8d97768-kmpjd                     0/1       Init:0/2   0          9m
artifactory-nginx-69f8d97768-l95vs                     1/1       Running    0          9m
kubectl get pvc | grep artifactory

artifactory-nginx                                                Bound     pvc-e899f622-ab99-11e8-b520-0a9fd9e2d170   5Gi        RWO            standard       10m
volume-artifactory-artifactory-ha-member-0                       Bound     pvc-e8b40093-ab99-11e8-ac7b-1237bc16f1bc   200Gi      RWO            standard       10m
volume-artifactory-artifactory-ha-member-1                       Bound     pvc-4291763c-ab9a-11e8-ac7b-1237bc16f1bc   200Gi      RWO            standard       7m
volume-artifactory-artifactory-ha-primary-0                      Bound     pvc-e8bb615f-ab99-11e8-ac7b-1237bc16f1bc   200Gi      RWO            standard       10m
kubectl describe pod artifactory-nginx-69f8d97768-kmpjd

[...]
Events:
  Type     Reason                 Age                From                                 Message
  ----     ------                 ----               ----                                 -------
  Warning  FailedScheduling       47s (x3 over 48s)  default-scheduler                    PersistentVolumeClaim is not bound: "artifactory-nginx" (repeated 12 times)
  Normal   Scheduled              45s                default-scheduler                    Successfully assigned artifactory-nginx-69f8d97768-kmpjd to ip-10-80-2-89.ec2.internal
  Normal   SuccessfulMountVolume  44s                kubelet, ip-10-80-2-89.ec2.internal  MountVolume.SetUp succeeded for volume "artifactory-artifactory-ha-token-6wv9c"
  Warning  FailedMount            44s                attachdetach-controller              AttachVolume.Attach failed for volume "pvc-e899f622-ab99-11e8-b520-0a9fd9e2d170" : "Error attaching EBS volume \"vol-0c875b4587a4178e4\"" to instance "i-04aaaeb9a03b14d7d" since volume is in "creating" state
  Warning  FailedAttachVolume     33s                attachdetach-controller              Multi-Attach error for volume "pvc-e899f622-ab99-11e8-b520-0a9fd9e2d170" Volume is already exclusively attached to one node and can't be attached to another

What you expected to happen:

It is not clear whether and why persistent volumes should be used for Nginx.
Can you clarify the purpose of these volumes and when they should or should not be used?

What should the expected behaviour be? Should multiple persistent volumes and persistent volume claims be created, or should the same volume be shared with an appropriate accessMode?
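
Until this is clarified, one possible workaround (a sketch, assuming the chart exposes the usual persistence values under nginx.persistence; the key names are illustrative) is to either disable Nginx persistence when running multiple replicas, or request an access mode that allows sharing:

nginx:
  replicaCount: 2
  persistence:
    enabled: false
    # Or, if the volume really is needed by every replica:
    # enabled: true
    # accessMode: ReadWriteMany   # requires a storage class that supports RWX (e.g. NFS/EFS)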

How to reproduce it (as minimally and precisely as possible):

helm install --name artifactory --set nginx.replicaCount=2 jfrog/artifactory-ha

Anything else we need to know:

ConfigMaps for security.import.xml and artifactory.config.import.xml can't be copied

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug

Version of Helm and Kubernetes:

K8S: v1.10.6
Helm: v2.9.1

Which chart: Artifactory 7.6.1

What happened: ConfigMaps for security.import.xml and artifactory.config.import.xml cannot be copied.

  Type     Reason                  Age                From                                Message
  ----     ------                  ----               ----                                -------
  Normal   SuccessfulMountVolume   1m                 kubelet, k8s-linuxpool1-16822035-0  MountVolume.SetUp succeeded for volume "artifactory-license"
  Normal   SuccessfulMountVolume   1m                 kubelet, k8s-linuxpool1-16822035-0  MountVolume.SetUp succeeded for volume "bootstrap-config"
  Normal   Scheduled               1m                 default-scheduler                   Successfully assigned artifactory-artifactory-0 to k8s-linuxpool1-16822035-0
  Normal   SuccessfulMountVolume   1m                 kubelet, k8s-linuxpool1-16822035-0  MountVolume.SetUp succeeded for volume "artifactory-artifactory-token-27wwz"
  Normal   SuccessfulAttachVolume  1m                 attachdetach-controller             AttachVolume.Attach succeeded for volume "pvc-2d274fb1-d020-11e8-8166-000d3a4e189f"
  Normal   SuccessfulMountVolume   50s                kubelet, k8s-linuxpool1-16822035-0  MountVolume.SetUp succeeded for volume "pvc-2d274fb1-d020-11e8-8166-000d3a4e189f"
  Normal   Pulling                 49s                kubelet, k8s-linuxpool1-16822035-0  pulling image "alpine:3.6"
  Normal   Pulled                  49s                kubelet, k8s-linuxpool1-16822035-0  Successfully pulled image "alpine:3.6"
  Normal   Pulled                  48s                kubelet, k8s-linuxpool1-16822035-0  Successfully pulled image "alpine:3.6"
  Normal   Created                 48s                kubelet, k8s-linuxpool1-16822035-0  Created container
  Normal   Started                 48s                kubelet, k8s-linuxpool1-16822035-0  Started container
  Normal   Pulling                 48s                kubelet, k8s-linuxpool1-16822035-0  pulling image "alpine:3.6"
  Normal   Created                 47s                kubelet, k8s-linuxpool1-16822035-0  Created container
  Normal   Started                 47s                kubelet, k8s-linuxpool1-16822035-0  Started container
  Normal   Pulling                 11s (x2 over 47s)  kubelet, k8s-linuxpool1-16822035-0  pulling image "docker.bintray.io/jfrog/artifactory-pro:6.5.0"
  Normal   Killing                 11s                kubelet, k8s-linuxpool1-16822035-0  Killing container with id docker://artifactory:FailedPostStartHook
  Normal   Pulled                  10s (x2 over 46s)  kubelet, k8s-linuxpool1-16822035-0  Successfully pulled image "docker.bintray.io/jfrog/artifactory-pro:6.5.0"
  Normal   Created                 10s (x2 over 46s)  kubelet, k8s-linuxpool1-16822035-0  Created container
  Normal   Started                 10s (x2 over 46s)  kubelet, k8s-linuxpool1-16822035-0  Started container
  Warning  FailedPostStartHook     10s (x2 over 46s)  kubelet, k8s-linuxpool1-16822035-0  Exec lifecycle hook ([/bin/sh -c cp -Lrf /bootstrap/* /artifactory_extra_conf/;
]) for Container "artifactory" in Pod "artifactory-artifactory-0_monitoring(da526df6-d028-11e8-8166-000d3a4e189f)" failed - error: command '/bin/sh -c cp -Lrf /bootstrap/* /artifactory_extra_conf/;
' exited with 1: cp: cannot create regular file '/artifactory_extra_conf/artifactory.config.import.xml': Permission denied
cp: cannot create regular file '/artifactory_extra_conf/security.import.xml': Permission denied
, message: "cp: cannot create regular file '/artifactory_extra_conf/artifactory.config.import.xml': Permission denied\ncp: cannot create regular file '/artifactory_extra_conf/security.import.xml': Permission denied\n"

What you expected to happen: The copy to be successful.
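
For illustration only, the kind of change that would avoid the permission error is making /artifactory_extra_conf writable by the Artifactory user before the postStart hook runs, for example via an init container (a hypothetical sketch; the UID, volume name, and mount path are assumptions, not the chart's actual values):

initContainers:
  - name: fix-extra-conf-permissions
    image: alpine:3.6
    # Assumes the Artifactory process runs as UID/GID 1030 and that
    # /artifactory_extra_conf is backed by a writable volume.
    command: ["sh", "-c", "chown -R 1030:1030 /artifactory_extra_conf"]
    volumeMounts:
      - name: artifactory-volume
        mountPath: /artifactory_extra_conf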

How to reproduce it (as minimally and precisely as possible): Simply try to add a custom ConfigMap.

Anything else we need to know: N/A

Support encrypted artifactory.config.import.xml file

Is this a request for help?: Yes


Is this a BUG REPORT or FEATURE REQUEST? (choose one): Feature Request

Version of Helm and Kubernetes:

Kubernetes: v1.10.6
Helm: v2.9.1

Which chart: Artifactory

What happened:

I was hoping we could extend the chart to support encrypted configuration. I discovered this when I realized there was no configuration for the artifactory.key under the /var/opt/jfrog/artifactory/etc/security folder, which is why my custom ConfigMap always failed. I have been resorting to:

kubectl cp artifactory.key monitoring/artifactory-artifactory-0:/var/opt/jfrog/artifactory/etc/security/

What you expected to happen: Configuration to be added to support encrypted configmaps.
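
For illustration, the request could amount to mounting artifactory.key from a Kubernetes Secret into etc/security before bootstrap. A sketch of what that could look like (the secret name and the chart value are hypothetical; no such value exists in the chart today):

kubectl create secret generic artifactory-key --from-file=artifactory.key
# Hypothetical chart option, shown only to illustrate the request:
# helm install jfrog/artifactory --set artifactory.encryptionKeySecretName=artifactory-key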

How to reproduce it (as minimally and precisely as possible): N/A

Anything else we need to know: N/A

Add documentation for AUTO_UPDATE_CONFIG feature of nginx

Is this a request for help?:
Yes

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Version of Helm and Kubernetes:
Helm 2.9.1 K8S 1.9.7

Which chart:
artifactory, artifactory-ha

What happened:
SKIP_AUTO_UPDATE_CONFIG is not documented

What you expected to happen:
Provide documentation on how to set the HTTP settings to use the AUTO_UPDATE_CONFIG feature of nginx.
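
For reference, a sketch of what such documentation could show, assuming the nginx container reads a SKIP_AUTO_UPDATE_CONFIG environment variable and the chart allows injecting extra environment variables into the nginx pod (the values key below is illustrative, not necessarily an existing option):

nginx:
  extraEnvironmentVariables:        # illustrative key name
    - name: SKIP_AUTO_UPDATE_CONFIG
      value: "true"                 # skip automatic reverse proxy configuration updates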

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:
