
charts's Introduction


Verdaccio stands for peace, stop the war, we will be yellow / blue 🇺🇦 until that happens.


Version Next (Development branch)

Looking for the Verdaccio 5 version? Check the 5.x branch. The plugins for v5.x that are hosted within this organization are located in the verdaccio/monorepo repository, while plugins for the next version are hosted in this project under ./packages/plugins. Keep in mind that next plugins will eventually become incompatible with v5.x versions. Note that contributing guidelines might differ between branches.

Verdaccio is a simple, zero-config-required local private npm registry. No need for an entire database just to get started! Verdaccio comes out of the box with its own tiny database and the ability to proxy other registries (e.g. npmjs.org), caching the downloaded modules along the way. For those looking to extend their storage capabilities, Verdaccio supports various community-made plugins to hook into services such as Amazon S3 or Google Cloud Storage, or you can create your own plugin.


Install

Latest Node.js v16 required

Install with npm:

npm install -g verdaccio@next

With yarn

yarn global add verdaccio@next

With pnpm

pnpm i -g verdaccio@next

or

docker pull verdaccio/verdaccio:nightly-master

or with the official Helm chart:

helm repo add verdaccio https://charts.verdaccio.org
helm repo update
helm install verdaccio verdaccio/verdaccio

Furthermore, you can read the Debugging Guidelines and the Docker Examples for more advanced development.

Plugins

You can develop your own plugins with the verdaccio generator. Installing Yeoman is required.

npm install -g yo
npm install -g generator-verdaccio-plugin

Learn more about developing plugins here. Share your plugins with the community.

Integration Tests

In our compatibility testing project, we're dedicated to ensuring that your favorite commands work seamlessly across different versions of npm, pnpm, and Yarn, from publishing packages to managing dependencies. Our goal is to give you the confidence to use your preferred package manager without any issues. So dive in, check out our matrix, and see how your commands fare across the board!

Learn or contribute here

Commands

The matrix covers npm 6, 7, 8, 9 and 10, pnpm 8 and 9 (beta), and Yarn 1, 2, 3 and 4, for the following commands:

  • publish
  • info
  • audit
  • install
  • deprecate
  • ping
  • search
  • star
  • stars
  • dist-tag

Donations

Verdaccio is run by volunteers; nobody is working full-time on it. If you find this project useful and would like to support its development, consider making a long-term support donation, and your logo will appear in this section of the readme.

Donate 💵👍🏻 starting from $1/month or just one single contribution.

What does Verdaccio do for me?

Use private packages

If you want to use all the benefits of the npm package system in your company without sending all your code to the public registry, Verdaccio lets you use your private packages just as easily as public ones.

Cache npmjs.org registry

If you have more than one server on which you want to install packages, you might want to use this to decrease latency (the presumably "slow" npmjs.org will be contacted only once per package/version) and to provide limited failover (if npmjs.org is down, we might still find something useful in the cache). It also helps you avoid incidents like "How one developer just broke Node, Babel and thousands of projects in 11 lines of JavaScript", "Many packages suddenly disappeared" or "Registry returns 404 for a package I have installed before".

Link multiple registries

If you use multiple registries in your organization and need to fetch packages from several sources in one single project, you can take advantage of Verdaccio's uplinks feature, chaining multiple registries and fetching from one single endpoint.
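As a sketch of that setup (the `internal` registry name, its URL, and the `@corp` scope are hypothetical), a config.yaml could declare two uplinks and route packages to each:

```yaml
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
  internal:
    url: https://registry.example.com/  # hypothetical second registry

packages:
  '@corp/*':                # hypothetical private scope
    access: $authenticated
    proxy: internal
  '**':
    access: $authenticated
    proxy: npmjs
```

Requests for `@corp/*` packages go to the internal uplink; everything else falls through to npmjs, all behind one endpoint.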

Override public packages

If you want to use a modified version of some 3rd-party package (for example, you found a bug, but the maintainer hasn't accepted your pull request yet), you can publish your version locally under the same name. See the details here.
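A sketch of how that can look in config.yaml (the package name is a placeholder): publish your patched version locally and omit the proxy entry for that package, so Verdaccio never consults the upstream for it:

```yaml
packages:
  'left-pad':               # placeholder for the package you patched
    access: $all
    publish: $authenticated
    # no "proxy" entry: Verdaccio serves only the locally published version
  '**':
    access: $all
    proxy: npmjs
```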

E2E Testing

Verdaccio has proved to be a lightweight registry that can be booted in a couple of seconds, fast enough for any CI. Many open source projects use Verdaccio for end-to-end testing; to mention some examples: create-react-app, Mozilla Neutrino, pnpm, Storybook, Babel.js, angular-cli or Docusaurus. You can read more here.

Furthermore, here are a few examples of how to start:

Watch our Videos

Node 2022, February 2022, Online Free

You might want to check out as well our previous talks:

Get Started

Run in your terminal

verdaccio

You may need to set some npm configuration; this is optional.

npm set registry http://localhost:4873/

For one-off commands or to avoid setting the registry globally:

NPM_CONFIG_REGISTRY=http://localhost:4873 npm i

Now you can navigate to http://localhost:4873/ where your local packages will be listed and can be searched.
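The registry can also be pinned per project instead of globally; a `.npmrc` file in the project root does this with standard npm configuration:

```ini
# .npmrc in the project root
registry=http://localhost:4873/
```

Only that project will resolve packages through Verdaccio; other projects keep their existing registry.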

Warning: Verdaccio does not currently support PM2's cluster mode; running it in cluster mode may cause unexpected behavior.

Publishing

1. create a user and log in

npm adduser --registry http://localhost:4873

If you use HTTPS, add the appropriate CA information ("null" means get the CA list from the OS):

npm set ca null

2. publish your package

npm publish --registry http://localhost:4873

This will prompt you for user credentials which will be saved on the verdaccio server.
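To avoid passing --registry on every publish, a package can also carry its target registry in package.json via npm's standard publishConfig field (the package name below is a placeholder):

```json
{
  "name": "@myscope/my-package",
  "version": "1.0.0",
  "publishConfig": {
    "registry": "http://localhost:4873"
  }
}
```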

Docker

Below is the most commonly needed information; every aspect of Docker and Verdaccio is documented separately.

docker pull verdaccio/verdaccio:nightly-master

All published versions are available as Docker tags.

Running Verdaccio using Docker

To run the docker container:

docker run -it --rm --name verdaccio -p 4873:4873 verdaccio/verdaccio
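For a setup that survives container restarts, here is a minimal docker-compose.yml sketch (the host paths are examples; the container paths follow the image's documented /verdaccio layout):

```yaml
version: "3"
services:
  verdaccio:
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - "4873:4873"
    volumes:
      - ./storage:/verdaccio/storage   # package data and htpasswd
      - ./conf:/verdaccio/conf         # config.yaml
```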

Docker examples are available in this repository.

Compatibility

Verdaccio aims to support all features of a standard npm client that make sense to support in a private repository. Unfortunately, it isn't always possible.

Basic features

  • Installing packages (npm install, npm update, etc.) - supported
  • Publishing packages (npm publish) - supported

Advanced package control

  • Unpublishing packages (npm unpublish) - supported
  • Tagging (npm dist-tag) - supported
  • Deprecation (npm deprecate) - supported

User management

  • Registering new users (npm adduser {newuser}) - supported
  • Change password (npm profile set password) - supported
  • Transferring ownership (npm owner) - supported
  • Token (npm token) - supported

Miscellaneous

  • Searching (npm search) - supported (cli / browser)
  • Ping (npm ping) - supported
  • Starring (npm star, npm unstar, npm stars) - supported

Security

  • Audit (npm/yarn audit) - supported

Report a vulnerability

If you want to report a security vulnerability, please follow the steps which we have defined for you in our security policy.

Special Thanks

Thanks to the following companies for helping us achieve our goals by providing free open source licenses and services. Every company provides enough resources to move this project forward.

  • JetBrains: provides product licenses for active maintainers, renewable yearly
  • Crowdin: provides the platform for translations
  • BrowserStack: provides a plan to run end-to-end testing for the UI
  • Netlify: provides a pro plan for website deployment
  • Algolia: provides search services for the website
  • Docker: offers unlimited pulls and unlimited egress to any and all users

Maintainers

  • Juan Picado (jotadeveloper, @jotadeveloper)
  • Ayush Sharma (ayusharma, @ayusharma_)
  • Sergio Hg (sergiohgz, @sergiohgz)
  • Priscila Oliveria (priscilawebdev, @priscilawebdev)
  • Daniel Ruf (DanielRuf, @DanielRufde)

You can find and chat with them over Discord, click here or follow them at Twitter.

Who is using Verdaccio?

🤓 Don't be shy, add yourself to this readme.

Open Collective Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]


Open Collective Backers

Thank you to all our backers! 🙏 [Become a backer]


Contributors

This project exists thanks to all the people who contribute. [Contribute].


FAQ / Contact / Troubleshoot

If you have any issue you can try the following options. Do not hesitate to ask, or check our issues database; perhaps someone has already asked what you are looking for.

License

Verdaccio is MIT licensed.

The Verdaccio documentation and logos (e.g., the .md, .png and .sketch files within the /assets folder, excluding /thanks) are Creative Commons licensed.

charts's People

Contributors

alexis974, andre161292, arcticsnowman, arthur2308, byceee, casserlyprogramming, dependabot[bot], dogrod, duboisph, greshilov, ivasan7, jfoechsler, jfrancisco0, jhonmike, juanpicado, kav, kimxogus, ksrt12, loicgombeaud, machadovilaca, marekaf, maslow, pohldk, pshanoop, rblaine95, sebastian-x86, todpunk, tomashejatko, v0112358, zzvara


charts's Issues

Enable private repo auth header injection via a Kubernetes Secret

Is your feature request related to a problem?
When deploying Verdaccio in Kubernetes via Helm, I have to embed a basic auth header in the Verdaccio ConfigMap. A ConfigMap is not a good place to store secrets because it is accessible to most users in a K8s cluster.

Describe the solution you'd like
Instead of using ConfigMaps for sensitive values, Kubernetes has a Secret feature which is more secure. These secrets can be bound to environment variables, which in turn can be accessed by a process running inside the pod. The specific changes I think will enable this are as follows:

  • Add a Helm parameter that holds remote registry auth header values, something like:

        remoteAuthHeaders: |-
          corp_artifactory: value1
          npmjs: value2
  • Update the Helm deployment to save these as secrets, and embed them in the deployed pod as environment variables, prefixed with "AUTH_" and suffixed with the upstream name, i.e. "AUTH_CORP_ARTIFACTORY" and "AUTH_NPMJS" for the above two examples
  • Change Verdaccio so that the code that connects to upstream repos inspects the environment for a variable called AUTH_$currentUpstreamName, and injects the value from the environment variable as an HTTP Authorization header in the format "Basic $envVarValue"

Describe alternatives you've considered
At the moment I am using Configmaps in Kubernetes. This works but is not secure enough for our auditors.

Additional context
I would like to implement this, just wanted to discuss it here first to make sure it makes sense and there isn't a better alternative before I make the changes.
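A minimal sketch of the Secret side of this proposal (all names and values are hypothetical, matching the examples above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: verdaccio-remote-auth
type: Opaque
stringData:
  AUTH_CORP_ARTIFACTORY: value1
  AUTH_NPMJS: value2
```

The Deployment could then expose these via envFrom with a secretRef, so the pod sees AUTH_CORP_ARTIFACTORY and AUTH_NPMJS as environment variables.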

Read-only error on htpasswd file when addUser

Hi,
I successfully installed Verdaccio via the provided Helm chart, but I'm facing a 500 error when I try to run the addUser command via the terminal.
Checking the pod logs I see:
Error: EROFS: read-only file system, open '/verdaccio/storage/htpasswd'

I'm installing verdaccio as a subchart with these values:

verdaccio:
  configMap: |
    storage: /verdaccio/storage/data

    web:
      title: Verdaccio

    auth:
      htpasswd:
        file: /verdaccio/storage/htpasswd
        algorithm: bcrypt
        rounds: 10

    uplinks:
      npmjs:
        url: https://registry.npmjs.org/
        agent_options:
          keepAlive: true
          maxSockets: 40
          maxFreeSockets: 10

    packages:
      '@company/*':
        access: $authenticated
        publish: $authenticated
        unpublish: $authenticated
      '@*/*':
        access: $authenticated
        publish: $authenticated
        proxy: npmjs
      '**':
        access: $authenticated
        publish: $authenticated
        proxy: npmjs

    log: {type: stdout, format: pretty, level: http}
  secrets:
    htpasswd:
    - username: "verdaccio"
      password: "verdaccio"
  ingress:
    enabled: true
    className: "nginx"
    paths:
      - /
    extraPaths: [ ]
    hosts:
      - verdaccio.my-monorepo.com
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
    tls:
      - hosts:
          - verdaccio.my-monorepo.com
        secretName: verdaccio-tls

Verdaccio is running correctly at https://verdaccio.my-monorepo.com using certificate from cert-manager.
I'm able to login on the UI with the verdaccio user defined in values above.

When I try to run addUser to add a user I get the error I mentioned above.

Checking the deployment.yaml template file of the chart I saw this:

...
      - mountPath: /verdaccio/storage/htpasswd
        name: htpasswd
        subPath: htpasswd
        readOnly: true
...

I'm guessing that readOnly: true may be the issue preventing the file from being updated when adding new users.
That property does not seem possible to override via values; it is hardcoded in the template.

Am I missing something?
I'm not that experienced with npm registries and htpasswd. Is there a way to retrieve the token for the .npmrc file from the verdaccio user used in the UI, so that I can use the same user both for the UI and in .npmrc for publishing packages?

In any case, adding multiple users to Verdaccio seems like a must-have feature. Any ideas how to solve this?

Thanks!

Verdaccio plugins not working

I'm installing verdaccio using helm and I need to install some additional verdaccio plugins notably:
verdaccio-ldap verdaccio-memory verdaccio-auth-memory

I'm doing that with the extraInitContainers in values.yml:

extraInitContainers:
   - name: populate-workdir
     image: verdaccio/verdaccio:5.2.0
     command:
       - 'sh'
       - '-c'
       - 'cp -a /opt/* /tmp/tmpopt/ > /dev/null 2>&1'
     volumeMounts:
     - name: verdoptio
       mountPath: /tmp/tmpopt/
   - name: install-plugins
     image: node:14.18.1-alpine
     command:
       - 'sh'
       - '-c'
       - |
         cd /opt/verdaccio/
         yarn config set enableProgressBars false
         yarn add verdaccio-ldap verdaccio-memory verdaccio-auth-memory
         yarn cache clean
         yarn workspaces focus --production > /dev/null 2>&1
         chown -R root /opt/
         chown 10001 /opt/verdaccio/
     volumeMounts:
     - name: verdoptio
       mountPath: /opt/

Then I have the following persistence defined:

persistence:
  accessMode: ReadWriteOnce
  enabled: true
  mounts:
    - mountPath: /opt/
      name: verdoptio
      readOnly: false
  selector: {}
  size: 8Gi
  volumes:
    - name: verdoptio
      emptyDir: {}

I've modified the verdaccio config.yml configmap as follows:

store:
  memory:
    limit: 1000

However, the pod is stuck in a crashloopbackoff with the error:

# k logs verdaccio-568c6599d9-bjs2p
Defaulted container "verdaccio" out of: verdaccio, populate-workdir (init), install-plugins (init)
 warn --- config file  - /verdaccio/conf/config.yaml
 error--- plugin not found. try npm install verdaccio-memory
(node:8) UnhandledPromiseRejectionWarning: Error:
        verdaccio-memory plugin not found. try "npm install verdaccio-memory"
    at /opt/verdaccio/build/lib/plugin-loader.js:110:13
    at Array.map (<anonymous>)
    at loadPlugin (/opt/verdaccio/build/lib/plugin-loader.js:62:37)
    at LocalStorage._loadStorePlugin (/opt/verdaccio/build/lib/local-storage.js:873:47)
    at LocalStorage._loadStorage (/opt/verdaccio/build/lib/local-storage.js:857:26)
    at new LocalStorage (/opt/verdaccio/build/lib/local-storage.js:49:31)
    at Storage.init (/opt/verdaccio/build/lib/storage.js:64:25)
    at _default (/opt/verdaccio/build/api/index.js:123:17)
    at startVerdaccio (/opt/verdaccio/build/lib/bootstrap.js:66:22)
    at InitCommand.execute (/opt/verdaccio/build/lib/cli/commands/init.js:65:37)
(Use `node --trace-warnings ...` to show where the warning was created)
(node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)
(node:8) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

So it seems like the verdaccio-memory plugin isn't being found but I am not sure why. If I run grep on package.json I do see it listed:

grep package.json -e "verdaccio-memory"
    "verdaccio-memory": "10.3.0"

Can anyone help me understand why this isn't working?

Thanks
Brad

Unable to change size with persistence.existingClaim

Issue

I am trying to set up a backup and restore system for my Verdaccio.
To create the backup, I took a snapshot of the persistent disk attached to the PersistentVolume created by the Helm chart.
To restore it, I created a persistent disk from my snapshot, created a PersistentVolume and PersistentVolumeClaim attached to the newly created disk, and upgraded the release with
helm upgrade <release_name> verdaccio/verdaccio --install --set persistence.existingClaim=<pv_with_restored_disk>
But now I'm not able to resize the existing PVC by passing persistence.size.

How to reproduce

Create new PersistenceVolumeClaim

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  labels:
    app: verdaccio-verdaccio
  name: new-verdaccio-claim
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
EOF

Attach to release and change the size

$ helm upgrade verdaccio verdaccio/verdaccio --install --set persistence.existingClaim=new-verdaccio-claim --set persistence.size=32Gi

Expected

PVC should resize.

Custom domain for charts

I'd prefer to have a custom domain instead of github.io. Ideas?

We have 2 domains, verdaccio.dev and verdaccio.org.

My bet:

  • charts.verdaccio.org
  • helm.verdaccio.org

... my point is I don't want to be tightly coupled to GitHub infrastructure from the very beginning. I want to be able to move the charts to another place in the future without forcing users to update the Helm repo URL.

@verdaccio/kubernetes @verdaccio/collaborators thoughts on this?

Pod status 'Pending'

Hi,

I tried to install using helm chart following the documentation:

sudo microk8s helm repo add verdaccio https://charts.verdaccio.org
sudo microk8s helm repo update
sudo microk8s kubectl create configmap verdaccio-config --from-file ./conf/config.yaml -n exipnus
sudo microk8s helm install npm --set existingConfigMap=verdaccio-config verdaccio/verdaccio -n exipnus

sudo microk8s kubectl describe pod/npm-verdaccio-f768bb697-jfqfn

and the pod status shows Pending.

when describing the pod:

Name:             npm-verdaccio-f768bb697-jfqfn
Namespace:        exipnus
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=npm-verdaccio
                  app.kubernetes.io/instance=npm
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=verdaccio
                  app.kubernetes.io/version=5.15.4
                  helm.sh/chart=verdaccio-4.9.2
                  pod-template-hash=f768bb697
Annotations:      checksum/config: 12f9d201dff56809aa159eaf93d6848e5ea7fa7205bfbbcf1b2cc303c4b7e5de
                  checksum/htpasswd-secret: 4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/npm-verdaccio-f768bb697
Containers:
  verdaccio:
    Image:        verdaccio/verdaccio:5.15.4
    Port:         4873/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:http/-/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/-/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24spz (ro)
      /verdaccio/conf from config (ro)
      /verdaccio/storage from storage (rw)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      verdaccio-config
    Optional:  false
  storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  npm-verdaccio
    ReadOnly:   false
  kube-api-access-24spz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  35s   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

This is my config file:

storage: /verdaccio/storage/data

priorityClass:
  enabled: true
  name: "high-priority"

persistence:
  enabled: true
  storageClass: ""

web:
  title: Verdaccio
  showSettings: false
  showFooter: false

auth:
  htpasswd:
    file: /verdaccio/storage/htpasswd
    max_users: -1

uplinks:
  npmjs:
    url: https://registry.npmjs.org/

packages:
  '@*/*':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated
    proxy: npmjs

  '**':
    access: $authenticated
    publish: $authenticated
    unpublish: $authenticated
    proxy: npmjs
server:
  keepAliveTimeout: 60

middlewares:
  audit:
    enabled: true

log: { type: stdout, format: pretty, level: http }

I also tried without the following

priorityClass:
  enabled: true
  name: "high-priority"

persistence:
  enabled: true
  storageClass: ""

Or any combination of them.

Below is the output of the following command:

sudo microk8s kubectl describe deployment.apps/npm-verdaccio -n exipnus
Name:               npm-verdaccio
Namespace:          exipnus
CreationTimestamp:  Mon, 03 Oct 2022 10:05:22 +0200
Labels:             app=npm-verdaccio
                    app.kubernetes.io/instance=npm
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=verdaccio
                    app.kubernetes.io/version=5.15.4
                    helm.sh/chart=verdaccio-4.9.2
Annotations:        deployment.kubernetes.io/revision: 1
                    meta.helm.sh/release-name: npm
                    meta.helm.sh/release-namespace: exipnus
Selector:           app.kubernetes.io/instance=npm,app.kubernetes.io/name=verdaccio
Replicas:           1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=npm-verdaccio
                    app.kubernetes.io/instance=npm
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=verdaccio
                    app.kubernetes.io/version=5.15.4
                    helm.sh/chart=verdaccio-4.9.2
  Annotations:      checksum/config: 12f9d201dff56809aa159eaf93d6848e5ea7fa7205bfbbcf1b2cc303c4b7e5de
                    checksum/htpasswd-secret: 4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
  Service Account:  default
  Containers:
   verdaccio:
    Image:        verdaccio/verdaccio:5.15.4
    Port:         4873/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:http/-/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/-/ping delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /verdaccio/conf from config (ro)
      /verdaccio/storage from storage (rw)
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      verdaccio-config
    Optional:  false
   storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  npm-verdaccio
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   npm-verdaccio-f768bb697 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  21s   deployment-controller  Scaled up replica set npm-verdaccio-f768bb697 to 1

It seems that no PersistentVolume is created.

Am I doing something wrong?

Thank you

podLabels in values breaks chart

Hello

I have my-values.yaml

podLabels:
  foo: bar

helm template verdaccio/verdaccio --values my-values.yaml

fires error

Error: YAML parse error on verdaccio/templates/deployment.yaml: error converting YAML to JSON: yaml: line 33: mapping values are not allowed in this context

When I run with --debug I see that

---
# Source: verdaccio/templates/deployment.yaml
#....
      labels:
        helm.sh/chart: verdaccio-4.6.1
        app.kubernetes.io/name: verdaccio
        app.kubernetes.io/instance: RELEASE-NAME
        app.kubernetes.io/version: "5.5.0"
        app.kubernetes.io/managed-by: Helm
        app: RELEASE-NAME-verdacciofoo: bar

You can see that there is no EOL after "app: RELEASE-NAME-verdaccio".

A dirty hack is adding {{printf "" }} at the end of {{- define "verdaccio.labels" -}}; maybe there is a more correct fix:

{{- define "verdaccio.labels" -}}
helm.sh/chart: {{ include "verdaccio.chart" . }}
{{ include "verdaccio.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app: {{ include "verdaccio.fullname" . }}
{{printf "" }}
{{- end }}
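A more conventional fix, sketched here with standard Helm template idioms rather than the chart's actual code, would be to render podLabels separately in the pod template with toYaml and nindent, instead of concatenating it onto the labels helper:

```yaml
      # deployment.yaml, pod template metadata (sketch)
      labels:
        {{- include "verdaccio.labels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
```

nindent always starts on a fresh line, so the missing-EOL problem cannot occur regardless of how the helper ends.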

Failed to create Ingress when migrating from 2.0.1 to 2.1.0 chart

I tried to upgrade from version 2.0.1 to version 2.1.0 but fails with error:

Error: UPGRADE FAILED: an error occurred while rolling back the release. original upgrade error: failed to create resource: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "npm.example.com" and path "/" is already defined in ingress default/verdaccio-verdaccio: failed to create resource: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "npm.example.com" and path "/" is already defined in ingress default/verdaccio-verdaccio

Why does it fail for the existing resource? The ingress was created when chart was deployed for the first time.

Frequent pod restarts for verdaccio

We are seeing frequent pod restarts when building our software project.

❱ kubectl get pods -n verdaccio
NAME                                                   READY   STATUS    RESTARTS   AGE
verdaccio-7855f98ff7-2drjp           1/1     Running   1          20h
verdaccio-7855f98ff7-l4lj8           1/1     Running   2          20h
verdaccio-7855f98ff7-wgtg2           1/1     Running   0          20h
verdaccio-sandbox-5d6959b689-z48zq   1/1     Running   0          20h
❱

There aren't any logs associated with the pod restarts and in the pod events we see that the pods are restarted due to liveness probe failures:

Events:
  Type     Reason     Age                 From     Message
  ----     ------     ----                ----     -------
  Warning  Unhealthy  52m (x16 over 20h)  kubelet  Liveness probe failed: Get "http://172.31.10.158:4873/-/ping": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  52m (x16 over 20h)  kubelet  Readiness probe failed: Get "http://172.31.10.158:4873/-/ping": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

The only significant difference from our values file is how we handled the packages section in the config:

      packages:
        '@kpn/*':
          access: $authenticated
          publish: <username>
          unpublish: <username>
        '**':
          access: $all
          proxy: npmjs

We also tried vertically and horizontally scaling up the pods to no avail:

    {{- if eq (requiredEnv "ENVIRONMENT") "prod" }}
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 1000m
        memory: 1Gi
    replicaCount: 3
    {{- end }}

These restarts wouldn't be a hassle if they didn't interrupt the build process:

npm ERR! code ETIMEDOUT
npm ERR! syscall connect
npm ERR! errno ETIMEDOUT
npm ERR! network request to http://172.31.46.209:32734/@types%2fnode/-/node-12.20.43.tgz failed, reason: connect ETIMEDOUT 172.31.46.209:32734
npm ERR! network This is a problem related to network connectivity.
npm ERR! network In most cases you are behind a proxy or have bad network settings.

customConfigMap support

Hello, there is a customConfigMap parameter in the documentation, but it is not used anywhere.

I will make a PR with an implementation of this parameter. If this is some obsolete stuff that should be removed, then feel free to reject my PR, but in that case I suggest removing it from the documentation. Thanks!

Can't seem to make it work with existing ingress in GKE

I'm trying to get Verdaccio running in an existing cluster, which is working for other services, e.g. a pypiserver with the following service YAML (this one works in my cluster; note that clusterIP is not defined in my local service YAML, it's being filled in when the service is created with kubectl apply):

spec:
  clusterIP: 10.4.13.73
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: pypiserver
  sessionAffinity: None
  type: ClusterIP

And, when I look at the service page for this one, it shows port of 80 and target port of 8080.

But Verdaccio installed with Helm, while it can be reached if I set up a local port forward, isn't working with the ingress on the public IP. The Verdaccio YAML in GKE looks like this (this service does not work, despite looking, to me, very similar to the above):

spec:
  clusterIP: 10.4.6.96
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http
  selector:
    app: verdaccio
    release: npmserver
  sessionAffinity: None
  type: ClusterIP

When I look at the service page for this one, it shows port of 4873 and target port of 0, so that feels like maybe a problem, somehow. I don't see any way to explicitly set targetPort (and it seems like that ought to be automagic since helm is setting up the service and knows where it runs better than I do, anyway). I think I'm misunderstanding something, but I can't find any clues for what after pretty extensive googling. I'm still pretty new to Kubernetes, though, so it may be obvious to someone else what I'm doing wrong.

My values.yaml contains:

service:
  annotations:
  clusterIP: ""
  externalIPs:
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 4873
  type: ClusterIP
  # nodePort: 31873

I've also tried explicitly setting externalIPs and loadBalancerIP, but that didn't seem to work either, and those aren't specified for my other, working services, AFAICS, so I don't think they should be needed here either.

Anybody have a clue they can lend me for how this is supposed to be configured with a GKE Ingress?
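
For comparison, a Service that routes unambiguously would pin targetPort to the container port numerically. This is a hand-written sketch of what the rendered Service could look like, not actual chart output; the port values are taken from the issue above:

```yaml
spec:
  ports:
  - port: 4873
    protocol: TCP
    targetPort: 4873   # numeric, instead of the named port "http"
  selector:
    app: verdaccio
    release: npmserver
  type: ClusterIP
```

A named targetPort ("http") is legal in Kubernetes, but some GKE ingress and UI code paths have historically handled named ports poorly, which is consistent with the "target port of 0" display described above.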

Support for npm token API

The v5 migration guide mentions that npm token is still supported but has had some updates.

However, when I try to use it against my installation, done via this Helm chart, I get the following:

$ npm token
npm ERR! code E404
npm ERR! 404 Not Found - GET https://registry-staging.example.com/-/npm/v1/tokens - File not found
$ npm token create
npm password: 
npm ERR! code E404
npm ERR! 404 Not Found - POST https://registry-staging.example.com/-/npm/v1/tokens

Is something in this chart not configured to support this v1/tokens API or is there some configuration that I need to add for it?
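
For what it's worth, Verdaccio only serves the /-/npm/v1/tokens endpoints when JWT API security is configured; with the default legacy tokens the routes 404. A sketch of the relevant Verdaccio config (standard Verdaccio syntax placed inside the chart's configMap value; the expiry value is arbitrary):

```yaml
security:
  api:
    jwt:
      sign:
        expiresIn: 60d
```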

Verdaccio Pod fails to start up when the Helm release is named "verdaccio"

When the Helm release is named "verdaccio", Kubernetes creates a Service named "verdaccio", which injects an environment variable VERDACCIO_PORT into the Pod itself. That conflicts with Verdaccio's own VERDACCIO_PORT variable, corrupting the startup of the Verdaccio service:

we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")
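
The collision comes from Kubernetes docker-link-style service environment variables: for every active Service, the kubelet injects variables derived from the uppercased service name into pods in the namespace. A small sketch of the naming rule (the IP/port value is illustrative):

```shell
# Kubernetes injects <UPPERCASED_SERVICE_NAME>_PORT=tcp://<clusterIP>:<port>
# into every pod in the namespace. For a service named "verdaccio" that is
# exactly the VERDACCIO_PORT variable Verdaccio itself reads as its listen
# address, producing the "invalid address - http://0.0.0.0:tcp://..." error.
svc_name="verdaccio"
env_name="$(printf '%s' "$svc_name" | tr 'a-z-' 'A-Z_')_PORT"
echo "$env_name"                       # VERDACCIO_PORT
echo "$env_name=tcp://10.0.0.1:4873"   # illustrative docker-link value
```

Renaming the Helm release (so the Service is not called exactly "verdaccio") avoids the clash entirely.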

Old appVersion in latest chart

The current latest version of the verdaccio chart defaults to an appVersion with various bugs in it.

This is a little unexpected: when trying Verdaccio for the first time, it comes up out of the box with various issues.

Problem with basic auth, repeats endlessly

I've configured basic auth (https://kubernetes.github.io/ingress-nginx/examples/auth/basic/), which prompts for credentials, then loads the page but immediately halts with a loading spinner and asks for credentials again.

A click on "Sign in" just shows the basic auth prompt again. This repeats endlessly. If I click "Cancel" after the first basic auth prompt, the page loads, but I don't see any packages. If I remove basic auth, I can see all packages.

Can anybody confirm this?

How to install plugins is not documented? (scroll down👇)

Hi,

I was browsing both the README and values.yaml and can't find a way to install plugins into Verdaccio using the Helm chart.

I was really hoping to get it to work somehow without having to rebuild the container image with the plugins I require.

Asking this because I need a running Verdaccio with both ldap and gcloud storage plugins (or minio storage).
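
One image-free pattern worth sketching, assuming the chart's extraInitContainers hook and its persistence.volumes/mounts values: install the plugins into a shared volume at startup and point Verdaccio's plugins directory at it. The volume name, image tag, and plugin path here are hypothetical:

```yaml
extraInitContainers:
  - name: install-plugins
    image: node:16-alpine
    # install plugin packages into the shared volume before verdaccio starts
    command: ["sh", "-c", "npm install --prefix /plugins verdaccio-ldap verdaccio-google-cloud"]
    volumeMounts:
      - name: plugins
        mountPath: /plugins

persistence:
  volumes:
    - name: plugins
      emptyDir: {}
  mounts:
    - name: plugins
      mountPath: /verdaccio/plugins
```

Verdaccio's config would then need its plugins path set to the mounted directory (e.g. plugins: /verdaccio/plugins in the configMap value).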

Chart not working on package request

After deploying the chart correctly, when I try to fetch a package the log shows this error, and the requested package cannot be retrieved.

error--- unexpected error: EROFS: read-only file system, mkdir '/verdaccio/conf/storage'

I think the error is here: the mount makes the entire folder read-only, when only the config.yaml file should be.

- mountPath: /verdaccio/conf
  name: config
  readOnly: true

This is the entire deployment yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    field.cattle.io/publicEndpoints: '[{"addresses":["116.203.86.132"],"port":443,"protocol":"HTTPS","serviceName":"dev:verdaccio-registry","ingressName":"dev:verdaccio-registry","hostname":"registry.dev.zondax.net","path":"/","allNodes":true}]'
    meta.helm.sh/release-name: verdaccio-registry
    meta.helm.sh/release-namespace: dev
    objectset.rio.cattle.io/id: default-test-verdaccio
  creationTimestamp: "2021-12-22T17:26:43Z"
  generation: 3
  labels:
    app: verdaccio-registry
    app.kubernetes.io/instance: verdaccio-registry
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: verdaccio
    app.kubernetes.io/version: 5.2.0
    helm.sh/chart: verdaccio-4.5.0
    objectset.rio.cattle.io/hash: 970e3431e667bac14c309fdcb87456611ae44404
  name: verdaccio-registry
  namespace: dev
  resourceVersion: "152387111"
  uid: 0e73554c-3f24-4467-b978-31e17f89c053
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: verdaccio-registry
      app.kubernetes.io/name: verdaccio
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        checksum/config: b05384757319d631c331eca55bdf9e233239199683fff781ac9f35ddc334edd0
        checksum/htpasswd-secret: 4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
      creationTimestamp: null
      labels:
        app: verdaccio-registry
        app.kubernetes.io/instance: verdaccio-registry
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: verdaccio
        app.kubernetes.io/version: 5.2.0
        helm.sh/chart: verdaccio-4.5.0
    spec:
      containers:
      - image: verdaccio/verdaccio:5.3.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /-/ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: verdaccio
        ports:
        - containerPort: 4873
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /-/ping
            port: http
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          runAsUser: 10001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /verdaccio/storage
          name: storage
        - mountPath: /verdaccio/conf
          name: config
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 101
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: verdaccio-registry
        name: config
      - name: storage
        persistentVolumeClaim:
          claimName: verdaccio-registry
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2021-12-22T17:38:46Z"
    lastUpdateTime: "2021-12-22T17:38:46Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2021-12-22T17:26:43Z"
    lastUpdateTime: "2021-12-22T17:38:46Z"
    message: ReplicaSet "verdaccio-registry-5f8cb496dd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 3
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
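
Given the mkdir '/verdaccio/conf/storage' in the error, it's also worth checking that the configured storage path points at the writable storage volume rather than inside the read-only /verdaccio/conf mount. A sketch of the relevant values.yaml fragment (standard chart configMap syntax; only the storage line matters here):

```yaml
configMap: |
  # must live on the writable "storage" volume, not under /verdaccio/conf
  storage: /verdaccio/storage/data
```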

GKE: can't get things to work when ingress: enabled: true

Similar to #48, I'm having a lot of problems getting this to work with GKE (1.18.16-gke.2100). However, I'm starting with a brand new cluster with no existing ingresses.

If I take https://github.com/verdaccio/charts/blob/master/charts/verdaccio/values.yaml and copy it locally and only change the following

ingress:
  enabled: true

Then run

helm install npm -f values.yaml verdaccio/verdaccio

I get

NAME: npm
LAST DEPLOYED: Thu May  6 00:41:02 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:

Then in my pod, I start to see the following:

warn --- invalid address - http://0.0.0.0:tcp://xx.xxx.xxx.xx:4873, we expect a port (e.g. "4873"),
host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

In the service UI, I see this

(screenshot of the service page)

If I leave ingress enabled set to false, everything spins up fine, but I can't get to it using an external IP.

Crash loop in Kubernetes with Helm

Your Environment

  • verdaccio version: 5.5.0
  • node version: Using docker image
  • package manager: Using docker image
  • os: Using docker image
  • platform: Google cloud + kubernetes + helm

Describe the bug

After deploying to GKE with Helm, it will go into a crash loop. The logs show this as the last log line:

warn --- invalid address - http://0.0.0.0:tcp://10.116.12.227:4873, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

To Reproduce

  • Deploy to GKE using Helm without any changes

I can also reproduce with:

kubectl run test --image=verdaccio/verdaccio:5.5.0 -i --tty --rm
If you don't see a command prompt, try pressing enter.
 warn --- config file  - /verdaccio/conf/config.yaml
 warn --- Plugin successfully loaded: verdaccio-htpasswd
 warn --- Plugin successfully loaded: verdaccio-audit
 warn --- invalid address - http://0.0.0.0:tcp://10.116.12.227:4873, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")

Expected behavior

  • No crash loop

Screenshots, server logs, package manager log

Added log

Configuration File (cat ~/.config/verdaccio/config.yaml)

Config file is from helm, no changes

Environment information

NA

Debugging output

NA

Contribute to Verdaccio

  • I'm willing to fix this bug 🥇

Need a way to change the ingress pathtype

ingress:
  enabled: true
  className: ""
  pathType: "ImplementationSpecific"
  paths:
    - /*

This does not work and is ignored; the web UI does not work at all without it in GKE. I have to change it manually after deployment, and it can revert to the original value (Prefix) at any time, which is not suitable for production. Can we add an option to change the pathType in values.yaml?

#53 made breaking changes

#53 makes helm upgrade (without any changes in values.yaml) fail with errors like the ones below:

ValidationError(Deployment.spec.template.spec.containers[0].securityContext): unknown field "enabled" in io.k8s.api.core.v1.SecurityContext
ValidationError(Deployment.spec.template.spec.containers[0].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext
cannot patch "npm-registry-verdaccio" with kind Deployment: Deployment.apps "npm-registry-verdaccio" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"npm-registry", "app.kubernetes.io/name":"verdaccio"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

It looks like #53 should have been a major version bump, not a minor one.

Mistakes in after-install NOTES

NAME: npm
LAST DEPLOYED: Mon Apr  5 13:00:12 2021
NAMESPACE: api
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace api -l "app=verdaccio,release=npm" -o jsonpath="{.items[0].metadata.name}")
  kubectl port-forward --namespace api $POD_NAME 8080:4873
  echo "Visit http://127.0.0.1:8080 to use your application"

-l "app=verdaccio,release=npm" should be -l "app=npm-verdaccio"

pod STATUS is CrashLoopBackOff

After we helm install it, the pod STATUS changes from Running to Completed, and then to CrashLoopBackOff:

kubectl get pod
NAME READY STATUS RESTARTS AGE
verdaccio-fd6cc49c4-jw5tk 0/1 CrashLoopBackOff 1 26s

kubectl logs -f verdaccio-fd6cc49c4-jw5tk

warn --- config file  - /verdaccio/conf/config.yaml
 warn --- Plugin successfully loaded: verdaccio-htpasswd
 warn --- Plugin successfully loaded: verdaccio-audit
 warn --- invalid address - http://0.0.0.0:tcp://10.230.80.202:4873, we expect a port (e.g. "4873"), host:port (e.g. "localhost:4873") or full url (e.g. "http://localhost:4873/")
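
A workaround that has worked for this class of failure, assuming the chart's extraEnvVars hook: pin VERDACCIO_PORT explicitly so the docker-link-style variable Kubernetes injects for a service named "verdaccio" cannot win, or simply name the release something other than "verdaccio".

```yaml
extraEnvVars:
  - name: VERDACCIO_PORT
    value: "4873"   # overrides the injected tcp://<clusterIP>:4873 value
```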

Documentation for Ingress HTTPS options

In https://github.com/verdaccio/charts#configuration, there are no entries for these options in values.yaml:

ingress:
  enabled: false
  # Set to true if you are on an old cluster where apiVersion extensions/v1beta1 is required
  useExtensionsApi: false
  paths:
    - /
  # Use this to define, ALB ingress's actions annotation based routing. Ex: for ssl-redirect
  # Ref: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/tasks/ssl_redirect/
  extraPaths: []
  hosts:
    - npm.blah.com
# annotations:
#   kubernetes.io/ingress.class: nginx
  tls:
    - secretName: secret
      hosts:
        - npm.blah.com

I'd like to set up TLS, but it's not clear to me how to do this. I provisioned a certificate for my load balancer in Google Cloud, but I'm not sure how to get that hooked up into this config.
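
A minimal sketch, assuming a TLS secret of type kubernetes.io/tls already exists in the namespace (the names here are hypothetical). A certificate provisioned directly on a Google load balancer is attached outside this chart, e.g. via GKE's ManagedCertificate resources and an ingress annotation, rather than through the tls: block:

```yaml
ingress:
  enabled: true
  hosts:
    - npm.blah.com
  tls:
    - secretName: npm-blah-com-tls   # hypothetical pre-created TLS secret
      hosts:
        - npm.blah.com
```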

Bug: Log level not picked up

Hello! I've tried changing the log level for the Verdaccio Helm installation on my cluster to log much less verbosely, but it still seems to log at info level:

kubectl describe configmap -n kube-system npm-verdaccio
...
# log settings
log: {type: stdout, format: pretty, level: error}

and logs still look like:

http <-- 200, user: null(10.10.46.77), req: 'GET /-/ping', bytes: 0/3 
info <-- 10.10.46.77 requested 'GET /-/ping' 
http <-- 200, user: null(10.10.46.77), req: 'GET /-/ping', bytes: 0/0 
info <-- 10.10.46.77 requested 'GET /-/ping' 
http <-- 200, user: null(10.10.46.77), req: 'GET /-/ping', bytes: 0/0 

I've also tried changing level to warn with the same result. My expectation would be that setting level to warn would make it only output warn and error logs, and level = error would be only error logs. Let me know if I appear to be doing something obviously wrong.

Thanks!
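
One thing to check, as an assumption based on Verdaccio 5's config rename: v5 reads a single top-level log object, while older chart defaults render a logs: list, which newer versions ignore, falling back to the default info level. A sketch of the newer form inside the chart's configMap value:

```yaml
# Verdaccio 5 syntax: singular `log`, one object (not the older `logs:` list)
log: {type: stdout, format: pretty, level: warn}
```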

Helm install verdaccio bug

Your Environment

  • verdaccio version: 5.21.1
  • node version [12.x.x, 14.x.x]: none
  • package manager: [npm@7, pnpm@6, yarn@2]
  • os: linux, k8s 1.25.6
  • platform: helm

Describe the bug

(screenshot)

To Reproduce

Expected behavior

Screenshots, server logs, package manager log

Configuration File (cat ~/.config/verdaccio/config.yaml)

Environment information

Debugging output

  • $ NODE_DEBUG=request verdaccio display request calls (verdaccio <--> uplinks)
  • $ DEBUG=verdaccio* verdaccio enable extreme verdaccio debug mode (verdaccio api)
  • $ npm -ddd prints:
  • $ npm config get registry prints:

Contribute to Verdaccio

  • I'm willing to fix this bug 🥇

Helm install warn --- invalid address

I am using Helm to install Verdaccio on an EKS cluster. There were no errors during installation, but the pod went into CrashLoopBackOff after deployment. I checked the pod's logs; here is the error message:

(screenshot of the error message)

I am using the same method in GCP and it is working perfectly. Could someone please assist me in resolving this?

Authentication not possible via Ingress URL

I originally opened this as a discussion, but after further testing figured it might be a bug. The problem occurs when simply deploying the Helm chart on a K8S cluster (i.e. Minikube) with ingress enabled. If I expose the pod via a service, I can log in and create users as expected. Using the ingress URL always fails with the message in the discussion below.

Discussed in https://github.com/verdaccio/verdaccio/discussions/2377

Originally posted by TheVanDoom August 13, 2021
I am trying to deploy Verdaccio to my Kubernetes cluster to use as a shared registry for my other components. My problem is that I cannot seem to get it to let me authenticate properly. I use the most recent Helm chart for deployment.
The configuration allows no registrations and expects authentication.

    # This is the config file used for the docker images.
    # It allows all users to do anything, so don't use it on production systems.
    #
    # Do not configure host and port under `listen` in this file
    # as it will be ignored when using docker.
    # see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
    #
    # Look here for more config file examples:
    # https://github.com/verdaccio/verdaccio/tree/master/conf
    #

    # path to a directory with all packages
    storage: /verdaccio/storage/data

    web:
      # WebUI is enabled as default, if you want disable it, just uncomment this line
      #enable: false
      title: DiPlom NPM Registry - Verdaccio

    auth:
      htpasswd:
        file: /verdaccio/storage/htpasswd
        # Maximum amount of users allowed to register, defaults to +infinity.
        # You can set this to -1 to disable registration.
        max_users: -1

    # a list of other known repositories we can talk to
    uplinks:
      npmjs:
        url: https://registry.npmjs.org/

    packages:
      '@*/*':
        # scoped packages
        access: $authenticated
        publish: $authenticated
        # proxy: npmjs

      '**':
        # allow all users (including non-authenticated users) to read and
        # publish all packages
        #
        # you can specify usernames/groupnames (depending on your auth plugin)
        # and three keywords: $all, $anonymous, $authenticated
        access: $authenticated

        # allow all known users to publish packages
        # (anyone can register by default, remember?)
        publish: $authenticated

        # if package is not available locally, proxy requests to 'npmjs' registry
        # proxy: npmjs

    # To use `npm audit` uncomment the following section
    middlewares:
      audit:
        enabled: true

    # log settings
    logs:
      - {type: stdout, format: pretty, level: http}
      #- {type: file, path: verdaccio.log, level: info}

I need user credentials to obtain the corresponding token so I can push to and pull from the registry. However, so far nothing I entered into the htpasswd file has worked. I've looked through the issues on here and a few tips on how to generate the htpasswd entry, but so far no success. When I try to log in via npm login, I get the following response:

npm ERR! code E409
npm ERR! 409 Conflict - PUT http://<IngressURL>/-/user/org.couchdb.user:admin - user registration disabled

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/<USR>/.npm/_logs/2021-08-13T10_23_30_382Z-debug.log

I then tried to configure the server to allow registrations by changing max_users to 1. When running npm adduser, I get the very same response. How am I supposed to use Verdaccio if there is no way to authenticate with the service? Or am I missing something?
Thanks.
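
For reference, the chart also accepts plain credentials through its secrets.htpasswd value and generates the htpasswd file itself, which sidesteps hand-crafted entries. A sketch with hypothetical credentials (mind the chart's own comments about which path auth.htpasswd.file must point to when this mechanism is used):

```yaml
secrets:
  htpasswd:
    - username: "admin"      # hypothetical
      password: "changeme"   # hypothetical
```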

pod restarts and npm ERR! cb() never called!

ENVIRONMENT:

helm: verdaccio/verdaccio: 4.8.1

 > k get pods -n marketplace
NAME                                    READY   STATUS    RESTARTS   AGE
verdaccio-95887d8-ddwerm2   1/1     Running   71         114d
  

We're noticing frequent pod restarts with the logs below. We run this as a cron job in an Argo workflow; it exits with the logs below on the first attempt and manages to run successfully on the second.

Container events:

time="2022-05-09T02:30:04.616Z" level=info msg="capturing logs" argo=true
argopm install <package>

npm WARN using --force I sure hope you know what you are doing.
npm ERR! cb() never called!

npm ERR! This is an error with npm itself. Please report this error at:
npm ERR!     <https://npm.community>

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2022-05-09T02_30_57_963Z-debug.log
Error: exit status 1

Is this due to running only a single replica (it has high usage), or is some misconfiguration causing it? The liveness probe is failing as well, with this error:

Liveness probe failed: Get "http://10.0.0.229:4873/-/ping": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Config:

          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /-/ping
              port: http
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: verdaccio
          ports:
          - containerPort: 4873
            name: http
            protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /-/ping
              port: http
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
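
Given a busy single replica, relaxing the probe timeout is one plausible mitigation. A sketch of adjusted probe settings (whether the chart exposes these directly depends on the chart version, so this may require patching the Deployment):

```yaml
livenessProbe:
  httpGet:
    path: /-/ping
    port: http
  periodSeconds: 10
  timeoutSeconds: 5    # the default above is 1s, easily exceeded under load
  failureThreshold: 3
```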

unexpected error: EACCES: permission denied

I upgraded to:

  chart:
    repository: https://charts.verdaccio.org
    name: verdaccio
    version: 0.16.2
  values:
    image:
      tag: 4.8.1

from chart 0.13.0 with no image specified:

  chart:
    repository: https://charts.verdaccio.org
    name: verdaccio
    version: 0.13.0

and suddenly I can't npm publish packages due to permission issues:

│  error--- unexpected error: EACCES: permission denied, open '/verdaccio/storage/data/@bsmartlabs/bsmart-global-style/bsmart-global-style-1.2.1-develop-dbfb6d31.tgz.tmp-6112493514612476'
│ Error: EACCES: permission denied, open '/verdaccio/storage/data/@bsmartlabs/bsmart-global-style/bsmart-global-style-1.2.1-develop-dbfb6d31.tgz.tmp-6112493514612476'

I use EFS.
Any ideas?

/verdaccio/storage/data $ ls -lha
total 3M
drwxr-sr-x  789 100      40051      34.0K Aug  8 00:47 .
drwxrws--x    3 root     40051       6.0K Jun 22 08:19 ..
-rw-r--r--    1 100      40051       1.3K Aug 25 02:32 .verdaccio-db.json
...

attached the relevant portions of my HelmRelease file

---
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: verdaccio
  namespace: verdaccio
spec:
  releaseName: npm
  chart:
    repository: https://charts.verdaccio.org
    name: verdaccio
    version: 0.16.2
  values:
    image:
      tag: 4.8.1
    persistence:
      storageClass: efs
    podAnnotations:
      backup.velero.io/backup-volumes: storage
    configMap: |
      storage: /verdaccio/storage/data
      max_body_size: 500mb
      ...
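
Since fsGroup-based ownership management does not apply to NFS-backed volumes like EFS, one common workaround (a sketch, assuming the chart's extraInitContainers hook and its volume named storage) is to chown the storage path to the Verdaccio UID before startup:

```yaml
extraInitContainers:
  - name: fix-storage-perms
    image: busybox
    # 10001 is the verdaccio user in the official image
    command: ["sh", "-c", "chown -R 10001 /verdaccio/storage"]
    volumeMounts:
      - name: storage
        mountPath: /verdaccio/storage
```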

Allow injection of environment variables from existing secret

In order to securely configure this chart in a GitOps workflow (namely with ArgoCD), I need:

  • to inject secret environment variables into the container,
  • and these secrets not to be stored in plain text in git.

A mechanism allowing for injection of environment variables stored in an externally-defined secret would thus be beneficial.

FYI for context, this is needed to configure the LDAP admin password for the verdaccio-ldap plugin, via the LDAP_ADMIN_PASSWORD environment variable.
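
Recent chart versions already expose this through extraEnvVars, which accepts full valueFrom definitions. A sketch with a hypothetical, externally managed Secret name:

```yaml
extraEnvVars:
  - name: LDAP_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: verdaccio-ldap-secret   # hypothetical external Secret
        key: admin-password
```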

Nginx-Ingress using certificate-arn has mixed-content issues

Using an ARN cert, I seem to have mixed content issues. I've tried a few different annotations to correct this, but so far they all produce the same result; has anyone seen this before? Thanks!

Nginx-Ingress Versions:
v0.48.1
Helm Version:
version.BuildInfo{Version:"v3.6.1", GitCommit:"61d8e8c4a6f95540c15c6a65f36a6dd0a45e7a2f", GitTreeState:"clean", GoVersion:"go1.16.5"}

Error within browser:
Mixed Content: The page at 'https://npm.{url}.com/' was loaded over HTTPS, but requested an insecure favicon 'http://npm.{url}.com/-/static/favicon.ico'. This request has been blocked; the content must be served over HTTPS.

Deployment Pod Logs

 http --- 200, user: null(192.168.36.66), req: 'GET /-/ping', bytes: 0/3
 http --- 192.168.36.66 requested 'GET /-/ping'
 http --- 200, user: null(192.168.36.66), req: 'GET /-/ping', bytes: 0/3
 http --- 192.168.36.66 requested 'GET /-/ping'
 http --- 200, user: null(192.168.36.66), req: 'GET /-/ping', bytes: 0/3
 http --- 192.168.36.66 requested 'GET /-/ping'
 http --- 200, user: null(192.168.36.66), req: 'GET /-/ping', bytes: 0/3
 http --- 192.168.36.66 requested 'GET /-/ping'

Values File:

image:
  repository: verdaccio/verdaccio
  tag: 5.1.1
  pullPolicy: IfNotPresent
  pullSecrets: []
    # - dockerhub-secret

nameOverride: ""
fullnameOverride: ""

service:
  annotations: {}
  clusterIP: ""

  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []

  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 4873
  type: ClusterIP
  # nodePort: 31873

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
affinity: {}

## Tolerations for nodes
tolerations: []

## Additional pod labels
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}

## Additional pod annotations
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

replicaCount: 1

resources: {}
  # requests:
  #   cpu: 100m
  #   memory: 512Mi
  # limits:
  #   cpu: 100m
  #   memory: 512Mi

ingress:
  enabled: true
  # Set to true if you are on an old cluster where apiVersion extensions/v1beta1 is required
  useExtensionsApi: false
  paths:
    - /
  # Use this to define, ALB ingress's actions annotation based routing. Ex: for ssl-redirect
  # Ref: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/tasks/ssl_redirect/
  extraPaths: []
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/proxy-body-size: 20m
    nginx.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:{arn-cert}
    ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/scheme: internet-facing
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/add-base-url: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
  tls:
  hosts:
  - npm.{url}.com

  
#   - secretName: secret
#     hosts:
#       - npm.blah.com
## Service account
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the Chart's fullname template
  name: ""

# Extra Environment Values - allows yaml definitions
extraEnvVars:
#  - name: VALUE_FROM_SECRET
#    valueFrom:
#      secretKeyRef:
#        name: secret_name
#        key: secret_key
#  - name: REGULAR_VAR
#    value: ABC

# Extra Init Containers - allows yaml definitions
extraInitContainers: []

configMap: |
  # This is the config file used for the docker images.
  # It allows all users to do anything, so don't use it on production systems.
  #
  # Do not configure host and port under `listen` in this file
  # as it will be ignored when using docker.
  # see https://github.com/verdaccio/verdaccio/blob/master/docs/docker.md#docker-and-custom-port-configuration
  #
  # Look here for more config file examples:
  # https://github.com/verdaccio/verdaccio/tree/master/conf
  #

  # path to a directory with all packages
  storage: /verdaccio/storage/data

  web:
    # WebUI is enabled as default, if you want disable it, just uncomment this line
    #enable: false
    title: Verdaccio

  auth:
    htpasswd:
      # Do not change this path if secrets htpasswd is used.
      file: /verdaccio/storage/htpasswd
      # Maximum amount of users allowed to register, defaults to "+infinity".
      # You can set this to -1 to disable registration.
      #max_users: 1000

  # a list of other known repositories we can talk to
  uplinks:
    npmjs:
      url: https://registry.npmjs.org/
      agent_options:
        keepAlive: true
        maxSockets: 40
        maxFreeSockets: 10

  packages:
    '@*/*':
      # scoped packages
      access: $all
      publish: $authenticated
      proxy: npmjs

    '**':
      # allow all users (including non-authenticated users) to read and
      # publish all packages
      #
      # you can specify usernames/groupnames (depending on your auth plugin)
      # and three keywords: "$all", "$anonymous", "$authenticated"
      access: $all

      # allow all known users to publish packages
      # (anyone can register by default, remember?)
      publish: $authenticated

      # if package is not available locally, proxy requests to 'npmjs' registry
      proxy: npmjs

  # To use `npm audit` uncomment the following section
  middlewares:
    audit:
      enabled: true

  # log settings
  logs: {type: stdout, format: pretty, level: http}
  # logs: {type: file, path: verdaccio.log, level: info}

persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## Requires Persistence.Enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Verdaccio data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"

  accessMode: ReadWriteOnce
  size: 8Gi
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector: {}

  volumes:
  #  - name: nothing
  #    emptyDir: {}
  mounts:
  # - mountPath: /var/nothing
  #   name: nothing
  #   readOnly: true

podSecurityContext:
  fsGroup: 101
securityContext:
  runAsUser: 10001

priorityClass:
  enabled: false
  # name: ""

existingConfigMap: false

# Secrets
secrets:
  # list of users and password for htpasswd plugin
  # This this is mounted as /verdaccio/auth/htpasswd on pods
  htpasswd: []
  # - username: "test"
  #   password: "test"
  # - username: "blah"
  #   password: "blah"

middlewares:
  oidc-ui:
    enabled: true
 
auth:
  oidc-ui:
    org: REQUIRED_GROUP
    client-id: {OIDC_CLIENT_ID}
    client-secret: {OIDC_CLIENT_SECRET}
    oidc-issuer-url: https://login.{url}.com/auth/realms/{realm}
    oidc-username-property: nickname
    oidc-groups-property: groups
 
url_prefix: https://npm.{url}.com
