
op-scim-helm's Introduction

1Password SCIM bridge Helm chart

This is the official Helm chart for deploying the 1Password SCIM bridge.

The chart exists to facilitate our one-click deployment options for the Google Cloud Marketplace and DigitalOcean Marketplace applications. As such, it is tailored to our specific use case and will likely not cover every configuration option or deployment scenario. For more general-purpose deployment options, please see our 1Password/scim-examples repository.

Installation guide

Install Helm

Install the latest version of Helm. See installing Helm from the official Helm documentation.

Add repository

helm repo add 1password https://1password.github.io/op-scim-helm
helm repo update

Install chart

helm install my-release 1password/op-scim-bridge

Uninstall chart

helm uninstall my-release

Available charts

Resource Recommendations

The default resource recommendations for the SCIM bridge and Redis deployments are acceptable in most scenarios, but they fall short in high volume deployments with large numbers of users and/or groups.

In those cases, we strongly recommend increasing the resources for both the SCIM bridge and Redis deployments.

Our current default resource requirements for the SCIM bridge (defined in values.yaml) and Redis (defined in values.yaml) are:

Expected Provisioned Users    Resources
1-1000                        Default
1000-5000                     High Volume Deployment
5000+                         Very High Volume Deployment
Default
requests:
  cpu: 125m
  memory: 256M

limits:
  cpu: 250m
  memory: 512M

Note that the recommended requests and limits values apply to both the SCIM bridge and Redis containers. After the initial large provisioning event, the values can be scaled back down to the defaults.

High Volume Deployment
requests:
  cpu: 500m
  memory: 512M

limits:
  cpu: 1000m
  memory: 1024M
Very High Volume Deployment
requests:
  cpu: 1000m
  memory: 1024M

limits:
  cpu: 2000m
  memory: 2048M

Updating resources

Updating the default values is a two-step process:

  1. Create a new file named override.yaml in the root directory of the op-scim-helm project and copy the content below into it. The proposed recommendations are already filled in.
# SCIM configuration options
scim:
  # resource sets the requests and/or limits for the SCIM bridge pod
  resources:
    requests:
      cpu: 500m
      memory: 512M
    limits:
      cpu: 1000m
      memory: 1024M
# Redis configuration options
redis:
  # resources sets the requests and/or limits for the Redis pod
  resources:
    requests:
      cpu: 500m
      memory: 512M
    limits:
      cpu: 1000m
      memory: 1024M
  2. Upgrade the op-scim-bridge chart with the updated override.yaml values:
helm upgrade -f override.yaml op-scim-bridge 1password/op-scim-bridge

If successful, you should see the message Release "op-scim-bridge" has been upgraded. Happy Helming!

You can verify the changes by describing the deployment with kubectl and referencing the Limits and Requests sections of the op-scim-bridge container:

kubectl describe deploy op-scim-bridge

For further understanding of how Kubernetes measures resources, please see Resource units in Kubernetes.

Please reach out to our support team if you need help with the configuration or to tweak the values for your deployment.


op-scim-helm's Issues

Move redis master kind to deployment

Currently we are using the default redis master kind (i.e. StatefulSet). This makes it hard to upgrade between versions of the chart, since StatefulSets are designed to be largely immutable.

Since the SCIM bridge is largely a stateless application we should instead use a deployment that is not limited during rolling upgrades.

Add contribution guide

We should add a contribution guide (CONTRIBUTING.md) to help guide folks submitting issues and PRs against this project.

We should also add a link to the contribution guide in the project README.

Update pod anti-affinity topologyKey to "hostname"

While testing on GCP we discovered that some pods could not be scheduled because no nodes were available that match the pod anti-affinity rules.

Upon investigation we discovered that we were specifying the "zone" instead of the "hostname" (of the node) in the topology key. This meant that it would rule out all nodes in a zone if one of the nodes had a matching label. This is not what was desired.

We should update the topologyKey to rather use the hostname, i.e. kubernetes.io/hostname.
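As a sketch of the proposed change (the pod labels shown here are illustrative; the chart's actual selectors may differ), the corrected rule would look like:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/component: op-scim-bridge
        # was: topology.kubernetes.io/zone, which ruled out whole zones;
        # kubernetes.io/hostname only rules out individual nodes
        topologyKey: kubernetes.io/hostname
```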

From the official documentation:

The anti-affinity rule says that the scheduler should try to avoid scheduling the Pod onto a node that is in the same zone as one or more Pods with the label security=S2. More precisely, the scheduler should try to avoid placing the Pod on a node that has the topology.kubernetes.io/zone=R label if there are other nodes in the same zone currently running Pods with the Security=S2 Pod label.

External redis/credentials setup got broken in 2.10.2

Your environment

Chart Version: any higher than 2.10.2

Helm Version: doesn't matter

Kubernetes Version: doesn't matter

What happened?

As part of the last two "minor" releases (please keep in mind that changing default behavior is by no means minor), the following things have been broken:

  • using credentials secret together with custom redisUrl
  • having separate settings for scimsession, workspaceSettingsFile, workspaceCredentialsFile

What did you expect to happen?

  • better documentation (including examples of files that you want to be created/mounted like the workspace settings/credentials and scimsession file),
  • default behavior not changing between minor versions of the chart,

Steps to reproduce

Use the following yaml together with version 2.10.1 vs anything higher:

scim:
  name: op-scim
  credentials: 
    volume:
      enabled: false
    secrets:
      enabled: true
      create: false
      scimsession:
        name: 1p-secret
        key: scimsession
          
  imagePullPolicy: Always

  service:
    enabled: true
    type: ClusterIP

  ingress:
    enabled: true
    className: nginx
    hosts:
      - host: domain
        paths:
          - path: /
            pathType: ImplementationSpecific

  config:
    jsonLogs: true
    redisUrl: redis://PROPER_REDIS_URL
  tls:
    enabled: false

  resources:
    requests:
      cpu: 125m
      memory: 256M
    limits:
      cpu: 250m
      memory: 512M

  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 2
    targetCPUUtilizationPercentage: 80

  podDisruptionBudget:
    enabled: false
    minAvailable: 1

  serviceAccount:
    create: true

redis:
  enabled: false

Notes & Logs

Use .Chart.name instead of .Values.applicationName

We should prefer using the Chart name value instead of creating a separate entry in the values.yaml file.

Currently we use applicationName where we can directly use the name declared in the Chart.yaml file.
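For illustration, the change amounts to swapping the values lookup for the built-in chart object in the templates (a sketch; the affected template files may vary):

```yaml
metadata:
  # before: name: {{ .Values.applicationName }}
  # after: reuse the name already declared in Chart.yaml
  name: {{ .Chart.Name }}
```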

Fix available charts link in README.md

The available charts link is broken now that we have renamed the chart.

Currently the link refers to:

https://github.com/1Password/op-scim-helm/tree/main/op-scim-bridge

But it should be pointing to:

https://github.com/1Password/op-scim-helm/tree/main/charts/op-scim-bridge

Google workspace credentials file value is misnamed in deployment template

The workspace credentials file value is set to workspaceCredentialsFile in the deployment template (deployment.yaml), but is called workspaceKeyFile in the values definition (values.yaml).

This results in the created kubernetes yaml being incorrect with an invalid OP_WORKSPACE_CREDENTIALS value:

            - name: "OP_WORKSPACE_CREDENTIALS"
              value: "/home/opuser/.op/"

We should rename workspaceKeyFile to workspaceCredentialsFile in the values definition.

Use redis in "standalone" mode instead of "replication"

Currently we use the default mode for bitnami/redis, which is "replication". By default we don't use any of the replication features. This results in us having only one "master" node with no "slaves". We're essentially mimicking standalone.

We should set the "architecture" to "standalone".
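Assuming the chart passes values through to the bitnami/redis dependency, the change would be a one-line override (sketch):

```yaml
redis:
  # bitnami/redis supports "standalone" and "replication"; standalone
  # runs a single node with no replicas, matching our actual usage
  architecture: standalone
```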

Fix helm commands in README

There is a typo in the repo README.

The install instruction reads:

helm install my-release 1password/op-scim-bridge

But this should be:

helm install my-release 1password/op-scim

Need to strip quotes on the debug flag.

I tried using the debug flag as I am having issues. However, enabling the debug flag fails the deployment. This is because Helm renders the boolean value without quotes, and Kubernetes requires env values to be strings. Therefore the OP_DEBUG value: line in deployment.yaml needs to quote the value.

(P.s. There may be other fields that need the same.)

--- a/op-scim-bridge/templates/deployment.yaml
+++ b/op-scim-bridge/templates/deployment.yaml
@@ -108,7 +108,7 @@ spec:
             {{- end }}
             {{- if .Values.scim.config.debug }}
             - name: "OP_DEBUG"
-              value: {{ .Values.scim.config.debug }}
+              value: {{ .Values.scim.config.debug | quote -}}
             {{- end }}
             {{- if .Values.scim.config.jsonLogs }}
             - name: "OP_JSON_LOGS"

Update chart name to op-scim-bridge

Currently the name field in the Chart.yaml is set to "op-scim". This does not match what the chart should be called, which is "op-scim-bridge".

Further to that we also have several instances in the values and templates where we append "-bridge" to the chart name, for example (from values.yaml):

scim:
  # name sets the name used for the various constructs used by the SCIM bridge setup
  name: "{{ .Chart.Name }}-bridge"

We should fix this inconsistency and set the chart name correctly. We should also ensure that the downstream apps that depend on this chart are updated.

Remove extra "bridge" from service name

The Service name contains an extra unnecessary "bridge".

I.e.

name: {{ .Values.applicationName }}-bridge-svc

Should be:

name: {{ .Values.applicationName }}-svc

Redis URL does not refer to in-cluster redis deployment

When deploying via helm with a custom name (or any name) and leaving the scim.config.redisURL value unset (the default), the pod expects to find the redis service at 1password-redis-master and fails. This value should default to the generated redis service name (to allow use of in-cluster DNS) unless overridden by the config value.

FTL failed to set up redis for SCIM bridge error="Network: (failed to NewRedisConnPool), cannot initialize connection: cannot connect to redis: dial tcp: lookup 1password-redis-master on 172.20.0.10:53: no such host" application=op-scim build=203011 function=GetCache version=2.3.1

As a general note, there seem to be a couple of places where the design of this chart makes such, or similar, assumptions (Let's Encrypt URL, domain, etc.). These are usually anti-patterns for helm, since as much as possible should be autogenerated. Not throwing shade here; this is very helpful and much appreciated!

I'm happy to help with any re-design or cleanup needed to enhance this chart so it's zero touch!
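As a workaround until the default is fixed, the Redis URL can be set explicitly to the generated service name. A sketch, assuming a release named my-release and the bitnami/redis master-service naming convention:

```yaml
scim:
  config:
    # "my-release" is a placeholder for your actual Helm release name;
    # some chart versions spell this key redisUrl rather than redisURL
    redisURL: "redis://my-release-redis-master:6379"
```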

Fix whitespace in generated yaml

We are creating unnecessary newlines in the generated yaml. We should look into replacing {{ ... }} with {{- ... }} in the templates.
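For example, the dash variant trims the surrounding whitespace (including the newline) that plain delimiters leave behind (generic Helm template sketch, not the chart's exact template):

```yaml
env:
  # with {{ }}, a false condition still emits a stray blank line:
  {{ if .Values.scim.config.debug }}
  - name: "OP_DEBUG"
    value: "true"
  {{ end }}
  # with {{- }}, the preceding whitespace and newline are trimmed away:
  {{- if .Values.scim.config.jsonLogs }}
  - name: "OP_JSON_LOGS"
    value: "true"
  {{- end }}
```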

Update node anti-affinity from "required" to "preferred"

To resolve an issue between the resource utilization of the SCIM bridge and redis we introduced pod anti-affinity configurations for both these applications. We have since moved to set resource requests and limits for each, as well as configure redis to use a fixed amount of memory (with an eviction policy).

We can now relax the rule from a hard requirement to a preference for the kubernetes scheduler.

For more details see the official kubernetes documentation.
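A sketch of the relaxed rule (labels and weight are illustrative): the scheduler will try to separate the pods but will still schedule them together if no other node is available.

```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: op-scim-bridge
          topologyKey: kubernetes.io/hostname
```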

Helm Repo Index file contains wrong release dates

The index.yaml file contains the wrong release date in the created element for the older charts. These should reflect the release date of each chart version; currently they all use the same value.

credentialsSecrets doesn't support redis.

Summary

It should be possible to set the OP_REDIS_URL (or any other secret) via an external secret, so it doesn't have to be hardcoded in values.yaml

Use cases

To support rotating the secrets securely it should be possible to provide all of them as an externally managed secret and mounted as env vars via envFrom.secretRef notation.

Proposed solution

Mount external secret via envFrom notation.
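A sketch of the proposed deployment-template fragment, assuming all secret-backed settings (OP_REDIS_URL, etc.) live in one externally managed Secret (the Secret name here is a placeholder):

```yaml
containers:
  - name: op-scim-bridge
    envFrom:
      - secretRef:
          # created outside the chart, e.g. by terraform or an
          # external-secrets operator; keys become env var names
          name: op-scim-secrets
```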

Is there a workaround to accomplish this today?

No.

References and prior work

Any proper helm chart.

Clean up legacy binaries

We have moved to using the chart-releaser CI job to automatically create releases for new versions of the op-scim Chart. Since then, the path to obtain the charts has moved from https://raw.githubusercontent.com/1password/op-scim-helm/main to https://1password.github.io/op-scim-helm, and the binaries are attached to the releases.

We still have the legacy binaries (e.g. op-scim-2.4.1.tgz) in the root of the project. These should be cleaned up once they are no longer used and folks have had a chance to update their Helm repo settings.

A good point to do that would be the next minor release of the SCIM bridge, i.e. v2.5.0 (or later).

We should also remove the legacy index.yaml from the root of this project.

Set reasonable default taints and tolerations

We should set reasonable defaults for the taints and tolerations to help ensure that the SCIM bridge and Redis are preferably not scheduled on the same node. The aim of this is to help ensure that the SCIM bridge and Redis affect each other as little as possible, and improve reliability.

Bad documentation causes issues when deploying.

Hi, most of the details in the template don't impact this issue; the only important thing is the chart version, which is 2.9.3.

The issue itself is that you need to set credentialsVolume: false if you want to use credentialsSecrets; otherwise the template is generated incorrectly.

Add support for namespaceOverride

Currently we set many of the namespace values to .Release.Namespace. We should allow users to override the namespace by setting .Values.namespaceOverride so that they can specify a different namespace for the release and that of the resources deployed with this chart.

See the common.names.namespace template from bitnami/common.
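A sketch of the helper this would add, mirroring common.names.namespace from bitnami/common (the define name is hypothetical):

```yaml
{{/*
Fall back to the release namespace unless namespaceOverride is set.
*/}}
{{- define "op-scim-bridge.namespace" -}}
{{- default .Release.Namespace .Values.namespaceOverride -}}
{{- end -}}
```

Templates would then use this helper for metadata.namespace instead of referencing .Release.Namespace directly.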

Add option to store scimsession in Secret instead of Persistent Volume

By default the helm chart deploys the SCIM bridge with a persistent volume claim to facilitate our various cloud provider marketplace 1-Click setup flows for providing the 1Password credentials to the server. Few cloud providers support ReadWriteMany for persistent volume claims, which prevents this chart from being used for HA deployments or zero-downtime upgrades.

When using the chart directly we should provide options for the source of the volume mount instead of relying on the persistent volume for use cases where the deployment is not using the 1-Click setup. The volume mount could be via a K8s Secret resource created via the chart or via Kubernetes Secrets Store CSI Driver.

Can't override storageClass to gp2

Hello,

When looking at the latest release, if I provide an override of storageClass: gp2 it's never applied. This seems to be missing from the latest release.

Thanks.

Expand Supported Configuration Options

The current helm chart lacks the flexibility expected for doing deployments into custom clusters. The chart should be updated to include configuration values (and secure defaults) for:

  • Annotations
  • Node Selector definitions
  • Pod Affinity definitions
  • Desired number of instances
  • Image Pull Policy
  • Image Pull Secrets
  • Security Context
  • Resource limits for the pod
  • Persistent Volume Storage Class name
  • Liveness and Readiness Probes
  • Configurable Redis Hostname
  • Ingress Options
    • Current default is load balancer
    • Allow for ingress rules to be defined outside of this chart
  • Configuration for TLS termination at Cluster ingress or Service
  • Pod Tolerations
  • Pod Disruption Budget

Examples of all these options can be seen in either the Connect Helm Chart or this closed PR

Manage Secret externally

We would like to manage our secret for scimsession via external tooling (terraform). So we need to inject the proper secret into the pod/deployment but we do not want the chart to manage the secret as we then would have to store the credential either in the value files or the helm arguments when calling.

However there is currently one single setting that manages both parts:

  • create/manage the Secret object
  • inject the secret into the Pod/Deployment
scim:
  credentialsSecrets:
    scimsession:
      key: scimsession
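A sketch of the desired split, borrowing the create flag layout that a newer credentials structure in this chart exposes (key names are illustrative and may differ by chart version):

```yaml
scim:
  credentials:
    secrets:
      enabled: true
      # don't render the Secret manifest; only mount/inject the
      # externally managed one (e.g. created by terraform)
      create: false
      scimsession:
        name: externally-managed-secret  # placeholder
        key: scimsession
```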

Use Upstream Redis Dependency

Currently the chart uses a custom Redis definition with minimal configuration for options like clustering and pod disruption budgets. We should move to using a maintained Redis chart with defaults that are similar to our current usage.

Adding a dependency to op-scim-bridge/templates/op-scim.yaml will allow users to refer to that chart's documentation on how to configure Redis beyond our default values.

dependencies:
   - name: redis
     version: 12.0.0
     repository: https://charts.bitnami.com/bitnami
     condition: redis.enabled

Using this dependency would result in a configuration similar to the current default:

# -- Redis configuration, check [the upstream configuration](https://github.com/bitnami/charts/blob/master/bitnami/redis)
redis:
  enabled: true
  image:
    registry: docker.io
    repository: bitnami/redis
    tag: 6.0.9-debian-10-r13
    pullPolicy: IfNotPresent
  cluster:
    enabled: false
  usePassword: false

Enforce EOF new line with Helm linter

We should add a trailing newline at the end of the yaml files and check for its existence with the linter.

This is important as there are parsers that don't like files without it.

value_json scimsession issue

Your environment

Chart Version:
2.9.1

Helm Version: 3.10.1

Kubernetes Version: 1.21

What happened?

When passing raw JSON scimsession data via "scim.credentialsSecrets.value_json", the helm template "secret.yaml" does a base64 encode before creating the Kubernetes Secret. Kubernetes natively base64 encodes the data when storing the secret, which leaves it double-encoded. The end result is that inside the op-scim-bridge container the environment var "OP_SESSION" is set to the base64-encoded scimsession data. Because of this, I am never able to get my bearer token to validate.

If I manually update the Kubernetes secret to be raw json scimsession data (resulting in OP_SESSION=<raw_JSON_scim_session>) everything works flawlessly.

Can someone tell me what I am missing? Does the op-scim code even support OP_SESSION value being base64 encoded?

Reference:

{{- $secret.value_json | b64enc | indent 2 }}

PR that added this: #64
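One way to avoid the double encoding (a sketch, not necessarily how the chart resolved it) is to put the raw value under stringData, which the Kubernetes API server base64-encodes exactly once on storage:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: op-scim-session  # placeholder name
type: Opaque
stringData:
  # raw JSON; no b64enc needed, the API server handles the encoding
  scimsession: {{ $secret.value_json | quote }}
```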

What did you expect to happen?

scimsession passed via value_json should work as expected

Steps to reproduce

  1. normal helm deployment using "scim.credentialsSecrets.value_json" with raw JSON data
  2. observe that OP_SESSION inside the container is base64 encoded
  3. try to validate a bearer token against your session
  4. manually update the kubernetes secret to contain the raw scimsession JSON.
  5. restart the pod
  6. try to validate a bearer token against your session

Notes & Logs

Add redis dependency for chart-releaser

With the introduction of the chart-releaser we had a pipeline failure, with the error message being:

Error: no repository definition for https://charts.bitnami.com/bitnami

The root cause of the error seems to relate to the op-scim Chart dependency not being satisfied at the point that the chart-releaser job packages the op-scim Chart for release.

After looking up the error a bit I found a thread describing the exact same issue we're seeing.

This issue is to implement a fix as suggested in a comment on the aforementioned thread.

Error: unbound immediate PersistentVolumeClaims (hardcoded `do-block-storage`)

Using all the defaults in values.yaml, I keep getting the 1 pod has unbound immediate PersistentVolumeClaims error message

Made sure .domain is an empty string to disable Let's Encrypt (1Password/scim-examples#104)

$ jq 'keys' scimsession  
[
  "deviceUuid",
  "domain",
  "encCredentials",
  "healthVerifier",
  "verifier",
  "version"
]
$ cat scimsession | jq .domain
""

Create secret

$ kubectl create secret generic scimsession --namespace 1password-scim --from-file=scimsession
secret/scimsession created

$ kubectl get secrets scimsession --namespace 1password-scim
NAME          TYPE     DATA   AGE
scimsession   Opaque   1      13s

Install latest op-scim helm chart

$ wget https://github.com/1Password/op-scim-helm/raw/main/op-scim-2.0.2.tgz
$ helm install 1password-scim op-scim-2.0.2.tgz --namespace 1password-scim
NAME: 1password-scim
LAST DEPLOYED: Wed Jun  2 21:40:38 2021
NAMESPACE: 1password-scim
STATUS: deployed
REVISION: 1
TEST SUITE: None

Check if it's deployed correctly

$ kubectl --namespace 1password-scim get pods
NAME                              READY   STATUS    RESTARTS   AGE
op-scim-bridge-7bd959c5b4-bfksc   0/1     Pending   0          3m55s
redis-bb88d7466-4kwhh             1/1     Running   0          3m55s

Look to see if there are any alarming events

$ kubectl --namespace 1password-scim describe pod op-scim-bridge-7bd959c5b4-bfksc
Name:           op-scim-bridge-7bd959c5b4-bfksc
Namespace:      1password-scim
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/application=op-scim-bridge
                app.kubernetes.io/component=op-scim-bridge
                pod-template-hash=7bd959c5b4
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/op-scim-bridge-7bd959c5b4
Init Containers:
  scimuser-home-permissions:
    Image:      alpine:3.12
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
    Args:
      mkdir -p /home/scimuser && chown -R 999 /home/scimuser
    Environment:  <none>
    Mounts:
      /home from op-scim-bridge-scimsession (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tn2nk (ro)
Containers:
  op-scim-bridge:
    Image:       1password/scim:v2.0.2
    Ports:       8080/TCP, 8443/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      /op-scim/op-scim
    Environment:
      OP_PORT:       8080
      OP_SESSION:    /home/scimuser/scimsession
      OP_REDIS_URL:  redis://op-scim-bridge-redis-svc:6379
    Mounts:
      /home from op-scim-bridge-scimsession (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tn2nk (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  op-scim-bridge-scimsession:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  op-scim-bridge-pvc
    ReadOnly:   false
  default-token-tn2nk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tn2nk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  4m14s                default-scheduler  0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  86s (x4 over 4m14s)  default-scheduler  0/2 nodes are available: 1 Too many pods, 1 pod has unbound immediate PersistentVolumeClaims.

Check for logs from the container and there aren't any.

$ kubectl --namespace 1password-scim logs op-scim-bridge-7bd959c5b4-bfksc op-scim-bridge
$

Check pvc

$ kubectl --namespace 1password-scim describe pvc
Name:          op-scim-bridge-pvc
Namespace:     1password-scim
StorageClass:  do-block-storage
Status:        Pending
Volume:        
Labels:        app.kubernetes.io/application=op-scim-bridge
               app.kubernetes.io/component=op-scim-bridge
               app.kubernetes.io/managed-by=Helm
Annotations:   meta.helm.sh/release-name: 1password-scim
               meta.helm.sh/release-namespace: 1password-scim
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    op-scim-bridge-7bd959c5b4-bfksc
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  0s (x3 over 27s)  persistentvolume-controller  storageclass.storage.k8s.io "do-block-storage" not found

Edit: Hmmm 🤔 so this requires the do-block-storage storage class, which requires a DigitalOcean API key?

$ kubectl --namespace 1password-scim describe storageclass do-block-storage
Error from server (NotFound): storageclasses.storage.k8s.io "do-block-storage" not found

It looks like the kubernetes instructions in scim-examples don't even use the do-block-storage... perhaps I'll have better luck there.

cc: @alicethorne-ab

Update redis common configuration

With the upgrade to bitnami/redis v16 we missed a couple of new configuration options.

We should ensure the following are set correctly using common configuration parameters:

  • disable auth
  • maxmemory
  • maxmemory-policy
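A sketch of the overrides, assuming the bitnami/redis v16 values layout (the memory limit and eviction policy values here are illustrative, not our confirmed settings):

```yaml
redis:
  # v16 moved password settings under the "auth" block
  auth:
    enabled: false
  # appended to the redis config on all nodes: cap memory usage and
  # evict keys with a TTL first when the cap is reached
  commonConfiguration: |-
    maxmemory 256mb
    maxmemory-policy volatile-lru
```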

Add issue templates

We should add issue templates to help guide folks submitting issues against this project.
