ChartMuseum

ChartMuseum is an open-source Helm Chart Repository server written in Go (Golang), with support for cloud storage backends, including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, Openstack Object Storage, Oracle Cloud Infrastructure Object Storage, Baidu Cloud BOS Storage, Tencent Cloud Object Storage, DigitalOcean Spaces, Minio, and etcd.

Works as a valid Helm Chart Repository, and also provides an API for uploading charts.

API

Helm Chart Repository

  • GET /index.yaml - retrieved when you run helm repo add chartmuseum http://localhost:8080/
  • GET /charts/mychart-0.1.0.tgz - retrieved when you run helm install chartmuseum/mychart
  • GET /charts/mychart-0.1.0.tgz.prov - retrieved when you run helm install with the --verify flag

Chart Manipulation

  • POST /api/charts - upload a new chart version
  • POST /api/prov - upload a new provenance file
  • DELETE /api/charts/<name>/<version> - delete a chart version (and corresponding provenance file)
  • GET /api/charts - list all charts
  • GET /api/charts/<name> - list all versions of a chart
  • GET /api/charts/<name>/<version> - describe a chart version
  • GET /api/charts/<name>/<version>/templates - get chart template
  • GET /api/charts/<name>/<version>/values - get chart values
  • HEAD /api/charts/<name> - check if chart exists (any versions)
  • HEAD /api/charts/<name>/<version> - check if chart version exists
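
As a quick sketch, assuming the server runs at http://localhost:8080 and a chart named mychart has been uploaded (hypothetical names), the read-only routes can be exercised with curl:

curl http://localhost:8080/api/charts                    # list all charts
curl http://localhost:8080/api/charts/mychart            # list all versions of mychart
curl -I http://localhost:8080/api/charts/mychart/0.1.0   # HEAD: check if version exists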

Server Info

  • GET / - HTML welcome page
  • GET /info - returns current ChartMuseum version
  • GET /health - returns 200 OK

Uploading a Chart Package

Follow "How to Run" section below to get ChartMuseum up and running at http://localhost:8080

First create mychart-0.1.0.tgz using the Helm CLI:

cd mychart/
helm package .

Upload mychart-0.1.0.tgz:

curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts

If you've signed your package and generated a provenance file, upload it with:

curl --data-binary "@mychart-0.1.0.tgz.prov" http://localhost:8080/api/prov

Both files can also be uploaded at once (or one at a time) on the /api/charts route using the multipart/form-data format:

curl -F "chart=@mychart-0.1.0.tgz" -F "prov=@mychart-0.1.0.tgz.prov" http://localhost:8080/api/charts

You can also use the helm-push plugin:

helm cm-push mychart/ chartmuseum
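
If the plugin is not installed yet, it can be added with:

helm plugin install https://github.com/chartmuseum/helm-push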

Installing Charts into Kubernetes

Add the URL to your ChartMuseum installation to the local repository list:

helm repo add chartmuseum http://localhost:8080

Search for charts:

helm search repo chartmuseum/

Install chart:

helm install chartmuseum/mychart --generate-name

How to Run

CLI

Installation

You can use the installer script:

curl https://raw.githubusercontent.com/helm/chartmuseum/main/scripts/get-chartmuseum | bash

or download manually from the releases page, which also contains all package checksums and signatures.

Determine your version with chartmuseum --version.

Configuration

Show all CLI options with chartmuseum --help. Common configurations can be seen below.

All command-line options can also be specified as environment variables, formed by capitalizing the option name and replacing all -'s with _'s.

For example, the env var STORAGE_AMAZON_BUCKET can be used in place of --storage-amazon-bucket.
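
As a sketch (the bucket name is a placeholder), the following two invocations are equivalent:

chartmuseum --port=8080 --storage="amazon" --storage-amazon-bucket="my-s3-bucket"

export PORT=8080
export STORAGE="amazon"
export STORAGE_AMAZON_BUCKET="my-s3-bucket"
chartmuseum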

Using a configuration file

Use chartmuseum --config config.yaml to read configuration from a file.

When using file-based configuration, the corresponding option name can be looked up in pkg/config/vars.go; it is the key of the configVars entry corresponding to the command-line option / environment variable. For example, --storage corresponds to storage.backend in the configuration file.

Here's a complete example of a config.yaml:

debug: true
port: 8080
storage.backend: local
storage.local.rootdir: <storage_path>
bearerauth: 1
authrealm: <authorization server url>
authservice: <authorization server service name>
authcertpath: <path to authorization server public pem file>
authactionssearchpath: <optional: JMESPath to find allowed actions in a jwt token>
depth: 2

Using with Amazon S3 or compatible services like Minio or DigitalOcean

Make sure your environment is properly set up to access my-s3-bucket.

For Amazon S3, the endpoint is automatically inferred.

chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-s3-bucket" \
  --storage-amazon-prefix="" \
  --storage-amazon-region="us-east-1"

For S3-compatible services like Minio, set the credentials using environment variables and pass the endpoint.

export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-s3-bucket" \
  --storage-amazon-prefix="" \
  --storage-amazon-region="us-east-1" \
  --storage-amazon-endpoint="my-s3-compatible-service-endpoint"

You need at least the following permissions inside your IAM policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListObjects",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::my-s3-bucket"
    },
    {
      "Sid": "AllowObjectsCRUD",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-s3-bucket/*"
    }
  ]
}

In order to work with AWS service accounts you may need to set AWS_SDK_LOAD_CONFIG=1 in your environment. For more context, please see here.

If you are using S3-compatible storage whose provider has disabled path-style addressing and forces virtual-hosted-style addressing, you can set the --storage-amazon-force-path-style option as in the following example:

export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-s3-bucket" \
  --storage-amazon-prefix="" \
  --storage-amazon-region="us-east-1" \
  --storage-amazon-endpoint="my-s3-compatible-service-endpoint" \
  --storage-amazon-force-path-style=false

For DigitalOcean, set the credentials using environment variables and pass the endpoint. Note that the region must be set to us-east-1, since that is what the DigitalOcean S3-compatible implementation expects; the actual region of your Spaces location is determined by the endpoint. Below we are using Frankfurt as an example.

export AWS_ACCESS_KEY_ID="spaces_access_key"
export AWS_SECRET_ACCESS_KEY="spaces_secret_key"
chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my_spaces_name" \
  --storage-amazon-prefix="my_spaces_name_subfolder" \
  --storage-amazon-region="us-east-1" \
  --storage-amazon-endpoint="https://fra1.digitaloceanspaces.com"

The access_key and secret_key can be generated from the DigitalOcean console, under the section API/Spaces_access_keys.

Note: on certain S3-based storage backends, the LastModified field on objects is truncated to the nearest second. For more info, please see issue #152.

In order to mitigate this, you may use the --storage-timestamp-tolerance option. For example, to round to the nearest second, you could use --storage-timestamp-tolerance=1s. For acceptable values to use for this field, please see here.
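
For example, a minimal sketch combining this option with the S3 settings shown above:

chartmuseum --debug --port=8080 \
  --storage="amazon" \
  --storage-amazon-bucket="my-s3-bucket" \
  --storage-amazon-region="us-east-1" \
  --storage-timestamp-tolerance=1s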

Using with Google Cloud Storage

Make sure your environment is properly set up to access my-gcs-bucket.

One way to do so is to set the GOOGLE_APPLICATION_CREDENTIALS var in your environment, pointing to the JSON file containing your service account key:

export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"

More info on Google Cloud authentication can be found here.

chartmuseum --debug --port=8080 \
  --storage="google" \
  --storage-google-bucket="my-gcs-bucket" \
  --storage-google-prefix=""

Using with Microsoft Azure Blob Storage

Make sure your environment is properly set up to access mycontainer.

To do so, you must set the following env vars:

  • AZURE_STORAGE_ACCOUNT
  • AZURE_STORAGE_ACCESS_KEY

chartmuseum --debug --port=8080 \
  --storage="microsoft" \
  --storage-microsoft-container="mycontainer" \
  --storage-microsoft-prefix=""

Using with Alibaba Cloud OSS Storage

Make sure your environment is properly set up to access my-oss-bucket.

To do so, you must set the following env vars:

  • ALIBABA_CLOUD_ACCESS_KEY_ID
  • ALIBABA_CLOUD_ACCESS_KEY_SECRET

chartmuseum --debug --port=8080 \
  --storage="alibaba" \
  --storage-alibaba-bucket="my-oss-bucket" \
  --storage-alibaba-prefix="" \
  --storage-alibaba-endpoint="oss-cn-beijing.aliyuncs.com"

Using with Openstack Object Storage

Make sure your environment is properly set up to access mycontainer.

To do so, you must set the following env vars (depending on your openstack version):

  • OS_AUTH_URL
  • either OS_PROJECT_NAME or OS_TENANT_NAME or OS_PROJECT_ID or OS_TENANT_ID
  • either OS_DOMAIN_NAME or OS_DOMAIN_ID
  • either OS_USERNAME or OS_USERID
  • OS_PASSWORD

chartmuseum --debug --port=8080 \
  --storage="openstack" \
  --storage-openstack-container="mycontainer" \
  --storage-openstack-prefix="" \
  --storage-openstack-region="myregion"

For Swift V1 Auth you must set the following env vars:

  • ST_AUTH
  • ST_USER
  • ST_KEY

chartmuseum --debug --port=8080 \
  --storage="openstack" \
  --storage-openstack-auth="v1" \
  --storage-openstack-container="mycontainer" \
  --storage-openstack-prefix=""

Using with Oracle Cloud Infrastructure Object Storage

Make sure your environment is properly set up to access my-ocs-bucket.

More info on Oracle Cloud Infrastructure authentication can be found here.

chartmuseum --debug --port=8080 \
  --storage="oracle" \
  --storage-oracle-bucket="my-ocs-bucket" \
  --storage-oracle-prefix="" \
  --storage-oracle-compartmentid="ocid1.compartment.oc1..1234"

Using with Baidu Cloud BOS Storage

Make sure your environment is properly set up to access my-bos-bucket.

To do so, you must set the following env vars:

  • BAIDU_CLOUD_ACCESS_KEY_ID
  • BAIDU_CLOUD_ACCESS_KEY_SECRET

chartmuseum --debug --port=8080 \
  --storage="baidu" \
  --storage-baidu-bucket="my-bos-bucket" \
  --storage-baidu-prefix="" \
  --storage-baidu-endpoint="bj.bcebos.com"

Using with Tencent Cloud COS Storage

Make sure your environment is properly set up to access my-cos-bucket.

To do so, you must set the following env vars:

  • TENCENT_CLOUD_COS_SECRET_ID
  • TENCENT_CLOUD_COS_SECRET_KEY

chartmuseum --debug --port=8080 \
  --storage="tencent" \
  --storage-tencent-bucket="my-cos-bucket" \
  --storage-tencent-prefix="" \
  --storage-tencent-endpoint="cos.ap-beijing.myqcloud.com"

Using with etcd

To use etcd as a backend, you need the CA certificate and the signed key pair. See here.

chartmuseum --debug --port=8080 \
  --storage="etcd" \
  --storage-etcd-cafile="/path/to/ca.crt" \
  --storage-etcd-certfile="/path/to/server.crt" \
  --storage-etcd-keyfile="/path/to/server.key" \
  --storage-etcd-prefix="" \
  --storage-etcd-endpoint="http://localhost:2379"

Using with local filesystem storage

Make sure you have read-write access to ./chartstorage (the directory will be created on first upload if it doesn't exist).

chartmuseum --debug --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage"

Basic Auth

If both of the following options are provided, basic http authentication will protect all routes:

  • --basic-auth-user=<user> - username for basic http authentication
  • --basic-auth-pass=<pass> - password for basic http authentication

You may want basic auth to apply only to operations that can change charts, i.e. PUT, POST, and DELETE. To avoid basic auth on GET operations, use:

  • --auth-anonymous-get - allow anonymous GET operations
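
A minimal sketch combining these options with local storage (the credentials are placeholders):

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --basic-auth-user="curator" \
  --basic-auth-pass="mypassword" \
  --auth-anonymous-get

Authenticated requests then pass the credentials with curl's -u flag:

curl -u curator:mypassword --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts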

Bearer/Token Auth

If all of the following options are provided, bearer auth will protect all routes:

  • --bearer-auth - enables bearer auth
  • --auth-realm=<realm> - authorization server url
  • --auth-service=<service> - authorization server service name
  • --auth-cert-path=<path> - path to authorization server public pem file
  • --auth-actions-search-path=<JMESPath> - (optional) JMESPath to find allowed actions in a jwt token

Using the options above, ChartMuseum is configured with a public key and will accept RS256 JWT tokens signed by the associated private key, passed in the Authorization header. You can use the chartmuseum/auth Go library to generate valid JWT tokens.
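
For example, a sketch of enabling bearer auth (the realm, service name, and certificate path are placeholders):

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --bearer-auth \
  --auth-realm="https://my-auth-server/token" \
  --auth-service="my-auth-service" \
  --auth-cert-path="./certs/authorization-server-cert.pem"

Clients then pass the token in the Authorization header:

curl -H "Authorization: Bearer <token>" http://localhost:8080/api/charts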

JWT Token without a custom JMESPath to find actions

In order to gain access to a specific resource, the JWT token must contain an access section in the claims. This section indicates which resources the user is able to access. Here is an example token payload:

{
  "exp": 1543995770,
  "iat": 1543995470,
  "access": [
    {
      "type": "artifact-repository",
      "name": "org1/repo1",
      "actions": [
        "pull"
      ]
    }
  ]
}

The type is always "artifact-repository", the name is the namespace/tenant (just use the string "repo" if using a single-tenant server), and actions is an array of actions the user can perform ("pull" and/or "push").

If your JWT token structure is different, you can configure a JMESPath string to define how the allowed actions are located in the token. For the type and the name, you can use the following placeholders:

  • name: $NAMESPACE
  • type: $ACCESS_ENTRY_TYPE

For example, to represent the default configuration, the JMESPath looks like: access[?name=='$NAMESPACE' && type=='$ACCESS_ENTRY_TYPE'].actions[].

For more information about how this works, please see chartmuseum/auth-server-example.

HTTPS

If both of the following options are provided, the server will listen and serve HTTPS:

  • --tls-cert=<crt> - path to tls certificate chain file
  • --tls-key=<key> - path to tls key file
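
For example, a sketch with placeholder certificate paths:

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --tls-cert="./certs/server.crt" \
  --tls-key="./certs/server.key"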

HTTPS with Client Certificate Authentication

If the above HTTPS values are provided in addition to below, the server will listen and serve HTTPS and authenticate client requests against the CA certificate:

  • --tls-ca-cert=<cacert> - path to tls CA certificate file

Just generating index.yaml

You can specify the --gen-index option if you only wish to use ChartMuseum to generate your index.yaml file. Note that this will only work with --depth=0.

The contents of index.yaml will be printed to stdout and the program will exit. This is useful if you are satisfied with your current Helm CI/CD process and/or don't want to monitor another webservice.
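
For example, a sketch that writes the generated index to a file using local storage:

chartmuseum --gen-index --depth=0 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" > index.yaml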

Other CLI options

  • --log-json - output structured logs as json
  • --log-health - log incoming /health requests
  • --log-latency-integer - log latency as an integer (nanoseconds) instead of a string
  • --disable-api - disable all routes prefixed with /api
  • --disable-delete - explicitly disable the delete chart route
  • --disable-statefiles - disable use of index-cache.yaml
  • --allow-overwrite - allow chart versions to be re-uploaded without ?force querystring
  • --disable-force-overwrite - do not allow chart versions to be re-uploaded, even with ?force querystring
  • --chart-url=<url> - absolute url for .tgzs in index.yaml
  • --storage-amazon-endpoint=<endpoint> - alternative s3 endpoint
  • --storage-amazon-sse=<algorithm> - s3 server side encryption algorithm
  • --storage-openstack-cacert=<path> - path to a custom ca certificates bundle for openstack
  • --chart-post-form-field-name=<field> - form field which will be queried for the chart file content
  • --prov-post-form-field-name=<field> - form field which will be queried for the provenance file content
  • --index-limit=<number> - limit the number of parallel indexers
  • --context-path=<path> - base context path (new root for application routes)
  • --depth=<number> - levels of nested repos for multitenancy
  • --cors-alloworigin=<value> - value to set in the Access-Control-Allow-Origin HTTP header
  • --read-timeout=<number> - socket read timeout for http server
  • --write-timeout=<number> - socket write timeout for http server

Docker Image

Available via GitHub Container Registry (GHCR).

Example usage (local storage):

docker run --rm -it \
  -p 8080:8080 \
  -e DEBUG=1 \
  -e STORAGE=local \
  -e STORAGE_LOCAL_ROOTDIR=/charts \
  -v $(pwd)/charts:/charts \
  ghcr.io/helm/chartmuseum:v0.16.2

Example usage (S3):

docker run --rm -it \
  -p 8080:8080 \
  -e DEBUG=1 \
  -e STORAGE="amazon" \
  -e STORAGE_AMAZON_BUCKET="my-s3-bucket" \
  -e STORAGE_AMAZON_PREFIX="" \
  -e STORAGE_AMAZON_REGION="us-east-1" \
  -v ~/.aws:/home/chartmuseum/.aws:ro \
  ghcr.io/helm/chartmuseum:v0.16.2

Helm Chart

There is a Helm chart for ChartMuseum itself.

You can also view it on Artifact Hub.

To install:

helm repo add chartmuseum https://chartmuseum.github.io/charts
helm install chartmuseum/chartmuseum --generate-name

If interested in making changes, please submit a PR to chartmuseum/charts. Before doing any work, please check for any currently open pull requests. Thanks!

Multitenancy

Multitenancy is supported with the --depth flag.

To begin, start with a directory structure such as

charts
├── org1
│   ├── repoa
│   │   └── nginx-ingress-0.9.3.tgz
├── org2
│   ├── repob
│   │   └── chartmuseum-0.4.0.tgz

This represents a storage layout appropriate for --depth=2. The organization level can be eliminated by using --depth=1. The default depth is 0 (single-tenant server).

Start the server with --depth=2, pointing to the charts/ directory:

chartmuseum --debug --depth=2 --storage="local" --storage-local-rootdir=./charts

This example will provide two separate Helm Chart Repositories at the following locations:

  • http://localhost:8080/org1/repoa
  • http://localhost:8080/org2/repob

This should work with all supported storage backends.
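
For example, to add one of these tenant repositories to your local Helm client:

helm repo add org1-repoa http://localhost:8080/org1/repoa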

To use the chart manipulation routes, simply place the name of the repo directly after "/api" in the route:

curl -F "chart=@mychart-0.1.0.tgz" http://localhost:8080/api/org1/repoa/charts

You may also experiment with the --depth-dynamic flag, which should allow for dynamic depth levels (i.e. all of /api/charts, /api/myrepo/charts, /api/org1/repoa/charts).

Pagination

For large chart repositories, you may wish to paginate the results from the GET /api/charts route.

To do so, add the offset and limit query params to the request. For example, to retrieve a list of 5 charts total, skipping the first 5 charts, you could use the following:

GET /api/charts?offset=5&limit=5
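
For example, with curl:

curl "http://localhost:8080/api/charts?offset=5&limit=5"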

Cache

By default, the contents of index.yaml (per-tenant) will be stored in memory. This means that memory usage will continue to grow indefinitely as more charts are added to storage.

You may wish to offload this to an external cache store, especially for large, multitenant installations.

Cache Interval

When dealing with thousands of charts, you may experience latency with the default settings. This is because upon each request, the storage backend is scanned for changes compared to the cache.

If you are ok with index.yaml being out-of-date for a fixed period of time, you can improve performance by using the --cache-interval=<interval> option. When this setting is enabled, the charts available for each tenant are refreshed on a timer.

For example, to only check storage every 5 minutes, you can use --cache-interval=5m.
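
A sketch of this option with local storage:

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --cache-interval=5m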

For valid values to use for this setting, please see here.

Using Redis

Example of using Redis as an external cache store:

chartmuseum --debug --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --cache="redis" \
  --cache-redis-addr="localhost:6379" \
  --cache-redis-password="" \
  --cache-redis-db=0

Prometheus Metrics

ChartMuseum exposes its Prometheus metrics at the /metrics route on the main port. This can be enabled with the --enable-metrics command-line flag or the ENABLE_METRICS environment variable.

Note that the Kubernetes chart currently disables metrics by default (ENABLE_METRICS=false is set in the chart). The --disable-metrics command-line flag has been deprecated and is only available in v0.14.0 and prior.
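
For example, a sketch of enabling and scraping metrics locally:

chartmuseum --port=8080 --enable-metrics \
  --storage="local" \
  --storage-local-rootdir="./chartstorage"

curl http://localhost:8080/metrics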

Below are the current application metrics exposed. Note that there is a per-tenant (repo) label. The repo label corresponds to the depth parameter, so with depth=2, as in the example above, the repo labels would be org1/repoa and org2/repob.

Metric | Type | Labels | Description
chartmuseum_charts_served_total | Gauge | {repo="*"} | Total number of charts
chartmuseum_chart_versions_served_total | Gauge | {repo="*"} | Total number of chart versions available

*: see above for repo label

There are other general global metrics harvested (per process, hence for all tenants). You can get the complete list by using the /metrics route.

Metric | Type | Labels | Description
chartmuseum_request_duration_seconds | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP request latencies in seconds
chartmuseum_request_duration_seconds_sum | Summary | | Running sum of request latencies
chartmuseum_request_duration_seconds_count | Summary | | Running count of requests
chartmuseum_request_size_bytes | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP request sizes in bytes
chartmuseum_request_size_bytes_sum | Summary | | Running sum of request sizes
chartmuseum_request_size_bytes_count | Summary | | Running count of requests
chartmuseum_response_size_bytes | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP response sizes in bytes
chartmuseum_response_size_bytes_sum | Summary | | Running sum of response sizes
chartmuseum_response_size_bytes_count | Summary | | Running count of responses
go_goroutines | Gauge | | Number of goroutines that currently exist

Notes on index.yaml

The repository index (index.yaml) is dynamically generated based on packages found in storage. If you store your own version of index.yaml, it will be completely ignored.

GET /index.yaml occurs when you run helm repo add chartmuseum http://localhost:8080 or helm repo update.
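
You can also fetch the generated index directly:

curl http://localhost:8080/index.yaml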

If you manually add/remove a .tgz package from storage, it will be immediately reflected in GET /index.yaml.

You are no longer required to maintain your own version of index.yaml using helm repo index --merge.

The --gen-index CLI option (described above) can be used to generate and print index.yaml to stdout.

Upon index regeneration, however, ChartMuseum will save a statefile in storage called index-cache.yaml, used for cache optimization. This file is only meant for internal use, but may be usable for migration to simple storage.

Mirroring the official Kubernetes repositories

Please see scripts/mirror-k8s-repos.sh for an example of how to download all .tgz packages from the official Kubernetes repositories (both stable and incubator).

You can then use ChartMuseum to serve up an internal mirror:

scripts/mirror-k8s-repos.sh
chartmuseum --debug --port=8080 --storage="local" --storage-local-rootdir="./mirror"

Custom Welcome Page

With the flag --web-template-path=<path>, you can specify the path to your custom welcome page.

The structure of the folder should be like this:

web/
  index.html
  xyz.html
  static/
      main.css
      main.js

ChartMuseum uses gin-gonic to serve the static files, which means you can use Go templates (go-template) to render the files.

If you don't specify a custom welcome page, ChartMuseum will serve the default one.
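
For example, a sketch assuming the web/ folder shown above:

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --web-template-path="./web"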

Artifact Hub

By setting the flag --artifact-hub-repo-id <repo id>, ChartMuseum will serve an artifacthub-repo.yml file with the specified repo ID in the repositoryID field of the YAML file.

repositoryID: the ID of the Artifact Hub repository where the packages will be published (optional, but it enables the verified publisher status)

Multitenancy

For multitenancy setups, you can provide a key-value pair to the flag in the format: --artifact-hub-repo-id <repo>=<repo id>

chartmuseum --storage local --storage-local-rootdir /tmp/ --depth 1 --artifact-hub-repo-id org1=<repo id> --artifact-hub-repo-id org2=<repo2 id>

The artifacthub-repo.yml file will then be served at /org1/artifacthub-repo.yml and /org2/artifacthub-repo.yml.
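
For example, fetching one tenant's file:

curl http://localhost:8080/org1/artifacthub-repo.yml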

Original Logo

"Preserve your precious artifacts... in the cloud!"

Subprojects

The following subprojects are maintained by ChartMuseum, including those referenced above:

  • chartmuseum/helm-push - Helm plugin to push charts to ChartMuseum
  • chartmuseum/auth - Go library for generating ChartMuseum JWT tokens
  • chartmuseum/auth-server-example - example authorization server for ChartMuseum bearer auth

Community

You can reach the ChartMuseum community and developers in the Kubernetes Slack #chartmuseum channel.

Issues

ChartMuseum caching is not working with dynamic aws credentials

I am deploying ChartMuseum using the Helm chart, and below is my configuration file:

spec:
  values:
    env:
      open:
        STORAGE: amazon
        STORAGE_AMAZON_BUCKET: xxxx-helm-charts
        STORAGE_AMAZON_PREFIX: xxxx-charts-s3
        STORAGE_AMAZON_REGION: eu-central-1
        AWS_SHARED_CREDENTIALS_FILE: /aws/credentials
        AWS_REGION: eu-central-1
    extraArgs:
      - --cache-interval=15m
    podAnnotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "vault-kubernetes"
        vault.hashicorp.com/agent-configmap: 'xxxx-charts-configmap'
        vault.hashicorp.com/agent-inject-containers: "chartmuseum"
        vault.hashicorp.com/secret-volume-path: "/aws"
    serviceAccount:
      create: false
      name: "default"
      automountServiceAccountToken: true

I am using the Vault AWS dynamic secrets engine to fetch credentials for connecting to S3. All is working fine, except I am getting this error (below) in my chartmuseum container logs. The secret is rotated successfully by the dynamic secrets engine, but somehow the ChartMuseum code that calls S3 per the cache-interval is still using the old credentials. It resolves if we restart the pod, but we do not want to add this restart.

_{"L":"INFO","T":"2023-04-08T19:35:17.293Z","M":"Rebuilding index for tenant","repo":""}
{"L":"ERROR","T":"2023-04-08T19:35:17.371Z","M":"InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.\n\tstatus code: 403, request id: XXXXXXXXXXXXX, host id: 9+****************************************************************************************=","repo":""}_

Error service annotation processing

Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key

Use --debug flag to render out invalid YAML

Steps to reproduce

create values.yaml

service:
  annotations:
    foo: "1"
    bar: "2"

try to render template

helm template some-name chartmuseum/chartmuseum --version 3.1.0 -f values.yaml -s templates/service.yaml

Missing config in documentation for service accounts and S3 backend

This is from the documentation at https://github.com/chartmuseum/charts/tree/main/src/chartmuseum#permissions-grant-with-iam-roles-for-service-accounts

permissions grant with IAM Roles for Service Accounts

For Amazon EKS clusters, access can be provided with a service account using IAM Roles for Service Accounts.

Specify custom.yaml with values such as:

env:
  open:
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: my-s3-bucket
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: us-east-1
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::{aws account ID}:role/{assumed role name}"

The provided values file doesn't work: using it as-is will use the node's role for S3 access instead of the service account's role, which resulted in access denied for me.
AWS_SDK_LOAD_CONFIG: true should be added to the environment variables:

env:
  open:
    AWS_SDK_LOAD_CONFIG: true
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: my-s3-bucket
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: us-east-1
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::{aws account ID}:role/{assumed role name}"

Separate Secrets for HTTP Basic Auth and Cloud Credentials

I will briefly describe my scenario:
On our cluster we run ArgoCD alongside Chartmuseum. They belong to the "core" services of the cluster that are needed for all further functionality. Chartmuseum hosts our private Charts and ArgoCD is responsible for CD.
We want to create a Secret chartmuseum-http-auth that contains username/password for HTTP Basic Auth. ArgoCD and Chartmuseum deployments should read from that Secret to get/set credentials.
This Secret would be created before Chartmuseum itself is deployed.
Additionally we have to ship Cloud Credentials with the Chartmuseum deployment to access GCS/S3/etc. Those would be deployed as part of the Chartmuseum deployment.

The issue: we cannot read the credentials from different Secrets. chartmuseum/templates/secret is only created if Values.env.existingSecret is not set. See: https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/secret.yaml#L1
In the deployment, however, we can only pass one secret name. See: https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/deployment.yaml#L92

My ideal workflow: the deployment of Chartmuseum would create the Secret containing Cloud Credentials just as it does now. However, I want to be able to read HTTP BA credentials from a different Secret that was (somehow) created before the Chartmuseum deployment.

Does that make sense for anybody else as well? 😁

CHART_URL not working properly

Hey folks,

I've deployed chartmuseum using the chart, and I've set CHART_URL like this:

env:
  open:
    CHART_URL: "http://helm.mycompany.com"

But when I fetch a package I get a 404 not found error with this message:

Error: failed to fetch http://helm.mycompany.com/helm.mycompany.com/charts/mycompany-app-1.1.tgz : 404 Not Found

As you can see the base url is repeated.

Here is the index.yaml:

apiVersion: v1
entries:
  mycompany-app:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-08-12T10:50:24.966478774Z"
    description: A Helm chart for Kubernetes
    digest: **DIGEST**
    name: mycompany-app
    type: application
    urls:
    - helm.mycompany.com/charts/mycompany-app-1.1.tgz
    version: "1.1"
generated: "2022-08-12T10:50:25Z"
serverInfo: {}

I wonder why the http:// is removed from the env var.
I deployed the chart once with CHART_URL: "helm.mycompany.com", but I then updated, uninstalled, and reinstalled it with the proper CHART_URL, and I still can't install or fetch because of this base domain repetition.

Any clues?

Deployment option to allow strategy Recreate

There is an issue when you use a PVC to store your charts and want to re-deploy.

In this case, whenever you re-deploy, because the default rolling-update strategy is set on the Deployment, the volume cannot be attached to the new pod while it is still attached to the old pod. The new pod will never reach Ready status and the deployment will fail.

Maybe it's a good idea to set the deployment strategy to Recreate if a PVC is set for the release?

Ability to source secrets from file path

Use case: Vault sidecar injection e.g. https://www.vaultproject.io/docs/platform/k8s/injector/examples#environment-variable-example

Since the entrypoint is /chartmuseum, it's not possible to source as in Hashicorp's examples.

I've also attempted the following, but Kubernetes only seems to interpolate for variable names that have already been defined.

extraArgs:
  - --basic-auth-user=$(cat /vault/secrets/config | awk '{print $1}')
  - --basic-auth-pass=$(cat /vault/secrets/config | awk '{print $2}')

Maybe the optimal solution would be to simply allow an onStart script or something that runs before /chartmuseum starts, so that you could source the environment variables?

Deprecated use of `targetPort` in ServiceMonitor

The Prometheus operator has deprecated .spec.endpoints.targetPort in ServiceMonitors in favor of port.

At the moment, the Service in the Helm chart points directly to service.targetPort if used (https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/service.yaml#L38), otherwise directly to the named http port of the main Chartmuseum container.

If the ServiceMonitor should still point towards the http port, an additional port should be opened when using service.targetPort, such that the Service goes from

  ports:
  - port: 8080
    targetPort: sidecar-http
    name: sidecar-http
    protocol: TCP

to

  ports:
  - port: 8080
    targetPort: http
    name: http
    protocol: TCP
  - port: 8080 # opening two ports with the same port number is probably not allowed
    targetPort: sidecar-http
    name: sidecar-http
    protocol: TCP

The Ingress resource should be changed accordingly, such as https://github.com/chartmuseum/charts/blob/main/src/chartmuseum/templates/ingress.yaml#L43

If what I propose above is the preferred solution, I could take a look at it.

Enable Support for HPA in chartmuseum

Overview:

We deployed ChartMuseum using ArgoCD, but we are unable to use HPA for scaling the replica sets of the ChartMuseum deployment out and in.
This is because we have a default value of replicas: 1 in the values.yaml file, which is the default value added to the Deployment object.

As per the Kubernetes documentation referenced below, we cannot both set replicas in the Deployment and enable a custom HPA.

Proposal:

  • Would it be possible to enable HPA for chartmuseum and make replicas in the Deployment spec conditional on HPA being enabled?

Reference:

Assume DISABLE_METRICS=false when serviceMonitor is enabled

Small quality-of-life enhancement suggestion:

If serviceMonitor.enabled is true, it stands to reason that the administrator intends for metrics to be scraped. In this case, env.open.DISABLE_METRICS should default to false unless explicitly overridden by user-supplied chart values.

401 when configuring s3

Hi,

When deploying the chartmuseum Helm chart with the following configuration:

Helm chart values (using 3.9.3):

podAnnotations:
  eks.amazonaws.com/sts-regional-endpoints: "true"

extraArgs:
  - --cache-interval=1m

env:
  open:
    AWS_SDK_LOAD_CONFIG: true
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: <BUCKET>
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: <REGION>
    CHART_POST_FORM_FIELD_NAME: chart
    PROV_POST_FORM_FIELD_NAME: prov
    DEPTH: 2
    DEBUG: true
    LOG_JSON: true
    DISABLE_STATEFILES: true
    ENABLE_METRICS: true
    DISABLE_API: false
    ALLOW_OVERWRITE: false
  existingSecret: chartmuseum-creds
  existingSecretMappings:
    BASIC_AUTH_USER: username
    BASIC_AUTH_PASS: password

service:
  type: NodePort

serviceMonitor:
  enabled: true

serviceAccount:
  create: false
  name: chartmuseum

ingress:
  enabled: true
  pathType: Prefix
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: <ARN>
  hosts:
    - name: chartmuseum.<DOMAIN>
      path: /

Serviceaccount:

apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: <ROLE>
  name: chartmuseum
  namespace: default

Terraform resources:

resource "aws_s3_bucket" "chartmuseum" {
  bucket = <BUCKET>
}

// <CUSTOM_K8S_IAM_ROLE_MODULE>

data "aws_iam_policy_document" "chartmuseum_policy" {
  statement {
    actions = [
      "s3:ListBucket"
    ]
    resources = [
      "arn:aws:s3:::<BUCKET>"
    ]
  }
  statement {
    actions = [
      "s3:DeleteObject",
      "s3:GetObject",
      "s3:PutObject"
    ]
    resources = [
      "arn:aws:s3:::<BUCKET>/*"
    ]
  }
}

When the pod is running I'm getting a loop of 401, such as:

{"L":"DEBUG","T":"2023-03-13T18:05:10.227Z","M":"[723] Incoming request: /","reqID":"6e8395d4-98b5-4254-9646-454a15ff1b50"}
{"L":"ERROR","T":"2023-03-13T18:05:10.228Z","M":"[723] Request served","path":"/","comment":"","clientIP":"10.4.51.192","method":"GET","statusCode":401,"latency":"21.228µs","reqID":"6e8395d4-98b5-4254-9646-454a15ff1b50"}

Any suggestion on what the problem could be?

Thank you,
André Nogueira

Basic Authentication not working

Hi,

We are using the following package versions:
Chartmuseum helm chart version: 3.7.1
Chartmuseum image version: 0.13.1

We tested Basic Authentication as described on this page by creating a secret:
https://github.com/chartmuseum/charts/tree/main/src/chartmuseum#authentication

kubectl create secret generic chartmuseum-secret --from-literal="basic-auth-user=curator" --from-literal="basic-auth-pass=mypassword"

We see the following two problems when we open https://chartmuseum.apps.XXXX.com/api/charts from a browser:

  1. The first time, the browser asks for user/pass, but the second time, even when the browser has been closed and reopened, it does not ask for user/pass. It looks like the session remembers the user/pass for a longer duration.

  2. We changed the password, but the browser is still able to connect and show the data with the old password!

Let us know how to fix this issue.

Regards
Ramki

Chartmuseum pod runs but gives "application unavailable" from its routed URL

We transitioned to using this chart after the original Helm chart for ChartMuseum was deprecated. The new instance has been deployed, however the application does not seem to actually be serving the endpoint. It is running in OCP4; here are some logs we got from our instance in OCP3 vs. OCP4. Let us know if more information is needed for assistance.

OCP3 pod logs: (screenshot not included)

OCP4 pod logs: (screenshot not included)

Install from chart to k8s with TLS

Hi,
Going through the installation instructions and chart templates, I find no option for enabling TLS in the application, only on the ingress.
What are the steps required to enable TLS in the application? If it's unnecessary or impossible, how can I enable TLS for the ingress while the application is still only listening on 8080 (HTTP)?
Thanks

Ingress ServiceBackendPort: "use-annotation" got "string", expected "integer" #581

From @cpboyd (helm/chartmuseum#581)

From https://artifacthub.io/packages/helm/chartmuseum/chartmuseum#extra-paths:

helm install my-chartmuseum chartmuseum/chartmuseum \
  --set ingress.enabled=true \
  --set ingress.hosts[0].name=chartmuseum.domain.com \
  --set ingress.extraPaths[0].service=ssl-redirect \
  --set ingress.extraPaths[0].port=use-annotation \

This, however, results in the following error:

Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0].backend.service.port.number): invalid type for io.k8s.api.networking.v1.ServiceBackendPort.number: got "string", expected "integer"

Replacing use-annotation with an integer works, but it'd be nice to simply refer to the port defined in the policy.

Is this a bug? initContainer not working when using securityContext and persistence

In deployment.yaml, the control structure looks like the code below:
{{- if .Values.securityContext.enabled }}
...
{{- else if .Values.persistence.enabled }}
...
{{- end}}

I think the correct version is:

{{- if .Values.securityContext.enabled }}
...
{{- if .Values.persistence.enabled }}
...
{{- end}}
{{- end}}

can't POST anything

Hi guys,
I'm trying to set up my chartmuseum; it looks like I'm missing something.
The chart itself installed straightforwardly. I see the "Welcome to ChartMuseum" start page in the browser,
and I can also reach "health":

curl https://charts.my.page/health
{"healthy":true}

but I can't see info and can't POST anything:

curl https://charts.my.page/info
{"error":"not found"}

curl --data-binary "@odoo-17.0.2.tgz" https://charts.my.page/api/charts
{"error":"not found"}

I'm on Rancher 2.5.4, RKE k8s 1.19.4.
This is the YAML I used during the chart installation:

---
  env: 
    open: 
      DEBUG: "false"
  ingress:
    enabled: true
    hosts:
      - name: charts.my.page
        path: /
        tls: true
        tlsSecret: charts.my.page-tls
    certManager: true
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
  persistence: 
    accessMode: "ReadWriteMany"
    enabled: "true"
    size: "2Gi"
    storageClass: "rook-cephfs"
  replicaCount: "2"

Chart-museum chart doesn't comply with 'restricted' Pod Security Standard

The current 'restricted' Kubernetes Pod Security Standard (https://kubernetes.io/docs/concepts/security/pod-security-standards/) requires the following to be set:

spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault

The current Helm chart contains a setting for runAsNonRoot but not for seccompProfile.

Suggestion:
The chart should contain options to specify a non-default seccompProfile.
Ideally, a fully custom securityContext should be possible.

I can do a pull request.

BUG - Invalid formatting for multiple service annotations

If you put multiple service annotations in the values.yaml file, the template will render an invalid format because the spacing will be off.

To recreate, set up values.yaml to look like:

service:
  servicename: chartmuseum
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 1.2.3.4/5
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    external-dns.alpha.kubernetes.io/hostname: charts.my.domain

Attempt to render the chart

helm template chartmuseum/chartmuseum -f values.yaml         
Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key

with debug (working parts omitted)

---
# Source: chartmuseum/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: chartmuseum
  annotations:
        external-dns.alpha.kubernetes.io/hostname: charts.my.domain
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    helm.sh/chart: chartmuseum-3.1.0
    app.kubernetes.io/name: chartmuseum
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.13.1"
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  loadBalancerSourceRanges:
  - 1.2.3.4/5
  ports:
  - port: 8080
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app.kubernetes.io/name: chartmuseum
    app.kubernetes.io/instance: RELEASE-NAME
.....(other objects rendered properly) .....

Error: YAML parse error on chartmuseum/templates/service.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
helm.go:81: [debug] error converting YAML to JSON: yaml: line 6: did not find expected key
YAML parse error on chartmuseum/templates/service.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
	helm.sh/helm/v3/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
	helm.sh/helm/v3/pkg/action/action.go:165
helm.sh/helm/v3/pkg/action.(*Install).Run
	helm.sh/helm/v3/pkg/action/install.go:240
main.runInstall
	helm.sh/helm/v3/cmd/helm/install.go:242
main.newTemplateCmd.func2
	helm.sh/helm/v3/cmd/helm/template.go:73
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:850
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:958
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:895
main.main
	helm.sh/helm/v3/cmd/helm/helm.go:80
runtime.main
	runtime/proc.go:225
runtime.goexit
	runtime/asm_amd64.s:1371

Notice how the two annotations for the Service are indented differently. This throws off the parser.

Support annotations on Service

Google Cloud relies on service annotations to enable certain load balancer features (like Cloud CDN). It would be useful to have this exposed via the Helm chart.

chartmuseum.com website is down

The chartmuseum.com website is down. It looks like there is an automated build failure on the website. Consequently, all the documentation is offline, which I need to access to get my k8s persistent volume deployment working with the ChartMuseum pod. I would really like to see more documentation on --storage-local-rootdir="./chartstorage" usage, since nothing indicates where this path is auto-created on the Linux file system. There are definitely some gaps in the docs around the container image running as user chartmuseum and the permissions necessary to add a persistent volume. It would be helpful if you baked into the image a /storage directory with the correct permissions, so that it could be used for storing the Helm chart data without having to hack the pod deployment configuration with initContainers and securityContexts.

AWS Load Balancer and Chart Museum issue

When trying to install the latest chart version with aws-load-balancer-controller, the following errors occur.

Diagnostics:
  eks:index:Cluster$aws:eks/cluster:Cluster (eks-cluster-eksCluster)
    Cluster is ready
 
  kubernetes:helm.sh/v3:Chart$kubernetes:apps/v1:Deployment (chartmuseum/chartmuseum)
    [1/2] Waiting for app ReplicaSet be marked available
    warning: [MinimumReplicasUnavailable] Deployment does not have minimum availability.
    warning: [ProgressDeadlineExceeded] ReplicaSet "chartmuseum-78cbfc496f" has timed out progressing.
    [1/2] Waiting for app ReplicaSet be marked available (0/1 Pods available)
    warning: [Pod chartmuseum/chartmuseum-78cbfc496f-8r9kt]: containers with unready status: [chartmuseum]
 
  kubernetes:helm.sh/v3:Chart$kubernetes:core/v1:Service (chartmuseum/chartmuseum)
    [1/3] Finding Pods to direct traffic to
 
  kubernetes:helm.sh/v3:Chart$kubernetes:networking.k8s.io/v1:Ingress (chartmuseum/chartmuseum)
    Retry #0; creation failed: Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.aws-lb-controller-ns.svc:443/validate-networking-v1-ingress?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "aws-load-balancer-controller-ca")
    error: resource chartmuseum/chartmuseum was not successfully created by the Kubernetes API server : Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.aws-lb-controller-ns.svc:443/validate-networking-v1-ingress?timeout=10s": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "aws-load-balancer-controller-ca")
chartMuseum = Chart(
    'chartmuseum',
    ChartOpts(
        chart="chartmuseum",
        version="3.9.3",
        fetch_opts=FetchOpts(
            repo="https://chartmuseum.github.io/charts"
        ),
        namespace=name,
        values={
            "ingress": {
                "enabled": True,
                "ingressClassName": "alb",
                "pathType": "ImplementationSpecific",
                "annotations": {
                    "alb.ingress.kubernetes.io/backend-protocol": "HTTP",
                    "alb.ingress.kubernetes.io/listen-ports": '[{"HTTPS":443},{"HTTP":80}]',
                    "alb.ingress.kubernetes.io/load-balancer-attributes":"idle_timeout.timeout_seconds=300",
                    "alb.ingress.kubernetes.io/scheme": "internet-facing",
                    "alb.ingress.kubernetes.io/ssl-redirect": "443"
                },
                "hosts": [
                    {
                        "name": f"{chartHostname}.{zoneName}",
                        "path": "/",

                        "tls": False
                    },
                ],
            },
            "env": {
                "open": {
                    "STORAGE": "amazon",
                    "STORAGE_AMAZON_BUCKET": cm_bucket.bucket,
                    "STORAGE_AMAZON_REGION": cm_bucket.region,
                    "DEBUG": True,
                    "DISABLE_API": False,
                    "ALLOW_OVERWRITE": True,
                    "AUTH_ANONYMOUS_GET": False,
                    "DEPTH": 1,
                    "AWS_SDK_LOAD_CONFIG": True,
                },
                "secret": {
                    "BASIC_AUTH_USER": "*****",
                    "BASIC_AUTH_PASS": "********",
                }
            },
            "serviceAccount": {
                "create": True,
                "annotations": {
                    "eks.amazonaws.com/role-arn": cm_role.arn
                }
            }
        }
    ),
    opts=pulumi.ResourceOptions(provider=provider,
                                depends_on=[alb_chart, cm_bucket])
)

duplicate labels in pvc

When enabling persistence, the PVC template duplicates labels, which results in an invalid manifest:

Error:

$ helm template myrelease chartmuseum/chartmuseum --set persistence.enabled=true
...
# Source: chartmuseum/templates/pvc.yaml   
kind: PersistentVolumeClaim                
apiVersion: v1                             
metadata:                                  
  name: myrelease-chartmuseum              
  labels:                                  
    helm.sh/chart: chartmuseum-3.9.2       
    app.kubernetes.io/name: chartmuseum    
    app.kubernetes.io/instance: myrelease  
    app.kubernetes.io/version: "0.15.0"    
    app.kubernetes.io/managed-by: Helm     
   # <---
    app.kubernetes.io/name: chartmuseum    
    app.kubernetes.io/instance: myrelease  
   # --->
...

Running Chartmuseum in HA mode?

Hello,

With Flux integration enabled and using spot instances, we observe quite a few issues when downloading charts; the issue could perhaps be mitigated by running Chartmuseum in HA mode?

This issue is more or less to verify whether that is even possible with this chart.
