
sm's Introduction

Storage Manager for XenServer


This repository contains the code that forms the storage management layer for XenServer. It consists of a series of "plug-ins" to xapi (the Xen management layer), written primarily in Python.

sm's People

Contributors

anoobs, benjamreis, bensimscitrix, chandrikas, cheng-z, edwintorok, franciozzy, gaborigloi, germanop, jonludlam, kostaslambda, letsboogey, liulinc, maelstrom96, marksymsctx, martinjorge, maxcuttins, minli1, peter-webbird, pritha-srivastava, qinz0822, rosslagerwall, siddharthv, simonjbeaumont, stefanopanella, stormi, timsmithctx, wescoeur, ydirson, zli

sm's Issues

Support NFSv4+ only storage repositories

On servers where NFSv3 is disabled and NFSv4+ is enabled and working, rpcbind and showmount commands do not work, causing the nfs.py helper functions to falsely conclude that the server is unreachable.

When activating an SR configured for NFSv4+, check_server_tcp() and check_server_service() should probably be skipped, or replaced with a simple check that 2049/tcp is open on the remote.
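
A minimal sketch of such a check, assuming a plain TCP probe on the NFS port is acceptable (the helper name check_nfs4_port is hypothetical, not part of nfs.py):

    import socket

    NFS_PORT = 2049

    def check_nfs4_port(server, timeout=5):
        # True if <server>:2049/tcp accepts a connection; this avoids the
        # rpcbind/showmount probes that fail on NFSv4-only servers.
        try:
            with socket.create_connection((server, NFS_PORT), timeout=timeout):
                return True
        except OSError:
            return False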

As discussed in xcp-ng/xcp#135

NFS v4.1, v4.2 support

Does sm support NFS v4.2?

If we look at https://github.com/xapi-project/sm/blob/master/drivers/nfs.py, there is a string there mentioning 4.1:

    'nfsversion', 'for type=nfs, NFS protocol version - 3, 4, 4.1']

But get_supported_nfs_versions() only looks for major versions, not minor ones. Does that mean the check that 4.1 is actually supported is missing, or does it mean 4.2 should be supported too?

    """Return list of supported nfs versions."""  
    valid_versions = set(['3', '4'])  
    cv = set()  
    try:  
        ns = util.pread2([RPCINFO_BIN, "-p", "%s" % server])

Looking at rpcinfo -p, it can only return major versions of NFS.

# cat /proc/fs/nfsd/versions
-2 -3 +4 +4.1 +4.2
# rpcinfo  -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  51368  status
    100024    1   tcp  36643  status
    100005    1   udp  34211  mountd
    100005    1   tcp  47985  mountd
    100005    2   udp  60375  mountd
    100005    2   tcp  50167  mountd
    100005    3   udp  50762  mountd
    100005    3   tcp  33111  mountd
    100003    4   tcp   2049  nfs

A reference issue is #30
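
Since rpcinfo can only report major versions, one hedged approach (illustrative only, not existing sm code) would be to validate just the major part of the requested nfsversion against the rpcinfo output and let the mount itself negotiate, or reject, the minor version:

    def major_version(nfsversion):
        # '4.2' -> '4', '4.1' -> '4', '3' -> '3'
        return nfsversion.split('.')[0]

    def is_supported(nfsversion, valid_versions=('3', '4')):
        # rpcinfo only lists majors, so 4.1/4.2 can only be checked as '4';
        # the mount will fail later if the minor version is unavailable.
        return major_version(nfsversion) in valid_versions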

PERC H965i and all Dell PERC 12 controllers only support 4K sectors

Per this discussion: https://xcp-ng.org/forum/topic/8150/sr_backend_failure_78-vdi-creation-failed-opterr-error-22

I see 2 options:

  1. "Fixing" this in SMAPIv1 if it's not too complicated
  2. Going faster on SMAPIv3 to at least have a working local SR driver

We are preparing an "upstream" kickoff for SMAPIv3 and its priorities, but before doing that, I wanted to get a feeling for option 1 and whether it's doable at all (i.e. not too much effort). What's your opinion @MarkSymsCtx?

GC issue preventing SR deletion with LVHDSR driver

We had an issue in our automated storage testing, where we create an LVHDSR, import a small VM, and move it around the pool (cold migration to the second pool host then back to the master, then live migration to the second host and back to the master).

What fails is our teardown phase when, after deleting the only VM on the SR, we try to delete the SR (xe sr-destroy). It fails because the SR is not empty.

The SR delete function does trigger a GC run, but it leaves one VDI behind:

Mar 22 19:22:31 r620-q2 SMGC: [29353] SR 4659 ('LVM-local-SR') (6 VDIs in 5 VHD trees):
Mar 22 19:22:31 r620-q2 SMGC: [29353]         *8341e744[VHD](2.000G//2.012G|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]             *53b46c10[VHD](2.000G//40.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]         *e7657971[VHD](2.000G//2.012G|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]         *43b9baf6[VHD](2.000G//8.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]         *080f8024[VHD](2.000G//8.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]         *4f3fa3ba[VHD](2.000G//172.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]
Mar 22 19:22:31 r620-q2 SMGC: [29353] Found 5 VDIs for deletion:
Mar 22 19:22:31 r620-q2 SMGC: [29353]   *53b46c10[VHD](2.000G//40.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]   *e7657971[VHD](2.000G//2.012G|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]   *43b9baf6[VHD](2.000G//8.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]   *080f8024[VHD](2.000G//8.000M|n)
Mar 22 19:22:31 r620-q2 SMGC: [29353]   *4f3fa3ba[VHD](2.000G//172.000M|n)

The parent of 53b46c10 is not selected for deletion.

If we re-attach the SR and force a new garbage collection (or wait for it to run again, but we can't wait that long in automated tests), then the last VDI is GCed too:

Mar 22 19:22:43 r620-q2 SMGC: [30330] SR 4659 ('LVM-local-SR') (1 VDIs in 1 VHD trees):
Mar 22 19:22:43 r620-q2 SMGC: [30330]         *8341e744[VHD](2.000G//2.012G|n)
Mar 22 19:22:43 r620-q2 SMGC: [30330]
Mar 22 19:22:43 r620-q2 SMGC: [30330] Found 1 VDIs for deletion:
Mar 22 19:22:43 r620-q2 SMGC: [30330]   *8341e744[VHD](2.000G//2.012G|n)

And then we can delete the SR properly.

Is it the expected behaviour of garbage collection not to remove the whole VHD tree, even when the whole tree is no longer useful? If so, should it run several times when force-triggered by an SR delete, to ensure all removable VHDs are removed and the SR delete doesn't fail?
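
A hedged sketch of the behaviour being asked for (gc_until_empty is an illustrative name, not an sm function): repeat GC passes until one finds nothing to collect, so a parent VHD freed by one pass is picked up by the next:

    def gc_until_empty(gc_pass, sr, max_passes=10):
        # Deleting a child VHD can make its parent collectable only on the
        # following pass, so loop until a pass deletes nothing.
        # gc_pass(sr) is assumed to return the number of VDIs it removed.
        for _ in range(max_passes):
            if gc_pass(sr) == 0:
                break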

NFSv4 support

I connected to a CentOS 6.x machine using NFSv4 and the speeds with version 4 of the protocol are great.
I tested this by modifying nfs.py to force the use of mount.nfs4, and it successfully mounted an NFSv4 share.
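
For reference, a minimal sketch of forcing an NFSv4 mount from Python (illustrative only; the actual nfs.py modification may differ):

    import subprocess

    def mount_nfs4(remote, mountpoint, options='soft,timeo=100'):
        # mount.nfs4 is shorthand for `mount -t nfs -o nfsvers=4 ...`
        subprocess.check_call(['mount.nfs4', remote, mountpoint, '-o', options])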

partitions not properly wiped

In reports like xcp-ng/xcp#390 and https://xcp-ng.org/forum/topic/7465/upgrade-to-8-3-beta-1-fails we see users getting hit by a previous ZFS partition not being properly wiped.

util.zeroOut uses dd to clear explicit block ranges in a given block device (which cannot account for all kinds of signatures in the wild), and tools like vgcreate are used in a mode that asks the user what to do. Would there be any problem in calling wipefs -a on the target block device?

I can see it could be a bad idea to use --force on vgcreate, as it is quite an "override all checks" flag, and there doesn't seem to be a flag that just ignores previous contents, although it could avoid upcoming issues that would not be covered by the current wipefs version.
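
A minimal sketch of the wipefs suggestion, assuming it would be invoked through sm's existing util.pread2 command wrapper just before vgcreate (the helper name wipe_signatures is hypothetical):

    import util  # sm's command wrapper, available within the drivers directory

    def wipe_signatures(dev):
        # wipefs -a erases all known filesystem/RAID/partition-table
        # signatures (including ZFS labels) from the block device.
        util.pread2(['/usr/sbin/wipefs', '-a', dev])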

Missing branch `release/stockholm/lcm` and tag v2.29.1

Hi!

We noticed that there is no release/stockholm/lcm maintenance branch, nor the v2.29.1 tag which, I assume, would come from that branch.

Is this something that you intend to push later, or will this repository stop receiving maintenance updates for branches other than master?

Best regards,

Samuel Verschelde

Make sm work with IPv6

Hi all!

I'm trying to play with IPv6 and I'm facing issues when adding an NFS SR on an IPv6-only pool. Is this currently supported?
If not, what needs to be done?

Here's the error I'm facing:
SR_BACKEND_FAILURE_1200(, not all arguments converted during string formatting, )

Resolve IP family of hostname in NFSSR.py

Hi!

I'm looking for ideas/help on how to resolve dconf["server"] in an NFS device config before loading the NFS driver on an SR.
If the server is a hostname, the transport will default to IPv4 even though the host could be configured for IPv6.
I'm looking at getaddrinfo, but it can return both IPv4 and IPv6 addresses for a hostname.
Is there a way to know how the PIF used for the NFS share is configured?
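
A minimal sketch of the getaddrinfo-based approach (illustrative only; it does not answer the PIF question): resolve the name and pick a family, preferring IPv6 when both are returned:

    import socket

    def resolve_family(server, port=2049):
        # Returns AF_INET6 if the host resolves to an IPv6 address,
        # otherwise AF_INET; a real fix might consult the PIF configuration.
        infos = socket.getaddrinfo(server, port, proto=socket.IPPROTO_TCP)
        families = {info[0] for info in infos}
        return socket.AF_INET6 if socket.AF_INET6 in families else socket.AF_INET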

Thanks!

dev_requirements.txt points to old unusable pylint version

pylint 1.5, as installed by pip install -r dev_requirements.txt, aborts with internal failures on at least CentOS 7 and Debian 11.

OTOH, pylint 2.13.9 does work, but it reports a dozen errors, so it might not be the recommended version either.

LVM and performance (locks)

Hi!

I have several questions about this lock:
https://github.com/xapi-project/sm/blob/master/drivers/lvutil.py#L183-L186

  • Why always use the same path (/var/lock/sm/.nil/lvm) for all volumes and LVM SRs? For example, when a command like the following is executed, would it be possible to use a lock path like /var/lock/sm/lvm/e9722947-a01a-8417-1edf-e015693bb7c9/cd9d03a6-ac19-42cf-9e8b-8625c0fa029b instead?
['/sbin/lvchange', '-ay', '/dev/VG_XenStorage-e9722947-a01a-8417-1edf-e015693bb7c9/VHD-cd9d03a6-ac19-42cf-9e8b-8625c0fa029b']
  • I'm not sure I understand the goal of this lock: there is already a lock in the smapi to protect the SR, and specific locks for the VDIs, isn't there?

I ask these questions because in some cases, for example when several hundred snapshot commands are executed, performance quickly becomes disastrous. Looking at the execution of a single snapshot, it can take 3m30, including 1m50 lost on this lock (because many parallel commands are executed). Would it be feasible to have a more specific lock for each VDI/SR?
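
A hedged sketch of the per-SR/per-VDI lock layout being asked about (hypothetical paths, not current sm behaviour):

    import os

    LOCK_ROOT = '/var/lock/sm/lvm'

    def lock_path(sr_uuid, vdi_uuid):
        # One lock file per SR/VDI pair instead of the single global
        # /var/lock/sm/.nil/lvm, so unrelated LVM commands don't serialize.
        return os.path.join(LOCK_ROOT, sr_uuid, vdi_uuid)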

Thank you!

IntelliCache and SMB SR

Just a small question regarding IntelliCache and SMB: I don't understand why the SR_CACHING capability is present on this SR:

CAPABILITIES = ["SR_PROBE", "SR_UPDATE", "SR_CACHING",

Because only the NFS type is involved during the cache configuration:

shared_target = NFSSR.NFSFileVDI(self.target.vdi.sr, parent_uuid)

Thank you!

MSA2050/2060 MP support

I've seen that multipath/multipath.conf got a few upgrades.
d732cf1 is the one we would need to properly support an MSA 2060 FC SAN.

I'm running 8.2 with current updates, but that config update didn't make it yet.
# multipathd show config
doesn't show it yet, so when will it make it? The 2050 is rather old by now and even the 2060 isn't 'fresh'.

(I've installed the sm-patch with CHV before and it also didn't appear under 8.2 CU1)

SMAPIv3 community kickoff

Hi everyone! 👋

We'd like to announce a public kickoff meeting related to SMAPIv3, under the aegis of the XAPI Project (inside the Xen Project, inside the LF). It will cover the open part: the API and related architecture. Since we'll develop many different open source storage plugins, it's very likely we'll bring modifications/improvements to the API and components, so it's better to do that collaboratively.

We already had some discussions about this with @edwintorok during the last Xen Summit, but this time we will bring a list of features and requirements that we identified on our side and that we'd like to complete with you. Then we could split the work into smaller pieces to actually deliver something sooner rather than later.

Everyone is welcome, obviously :) We'd like to target a first meeting sometime next week (on Jitsi): 30 minutes to list the prerequisites and the targets that are reasonable to start with. If you have preferences regarding the time slot, let me know; we are pretty flexible. Here is a first list of possible schedules:

  • Tuesday afternoon (23rd) after 2PM UTC
  • Wednesday morning (24th) after 10AM UTC or afternoon after 3PM UTC
  • anytime Thursday or Friday (25/26th)

Let us know and see you there :)

set archive=1 in /etc/lvm/lvm.conf?

From /etc/lvm/lvm.conf:

backup {
        # Configuration option backup/archive.
        # Maintain an archive of old metadata configurations.
        # Think very hard before turning this off. 
        archive = 0

Note that it says "Think very hard before turning this off".
Why is archive off in XCP-ng?

Adding this to the SM tracker based on advice in xcp-ng/xcp#619

update tests/README

Mate, is it possible to update tests/README to reflect the current status of testing in SM? E.g., it would be nice to know how one can run these tests locally, whether they are automatically run by some CI system, etc.

Missing tags?

Hi,

I think that when the state of this repository was updated with the Python 3 port, the tags were not pushed. Maybe it's simply a missing git push --tags?

Remove OCFS SR implementations

The codebase includes partial implementations of OCFS-based SRs for iSCSI and HBA. These are not active in the product (no symlinks are created) and are entirely unmaintained. In the interest of reducing technical debt and increasing the general level of code coverage from unit testing, it would be desirable to remove these SR types from the code base.

enable issue_discards=1 in /etc/lvm/lvm.conf?

Several users have asked us at XCP-ng to set issue_discards to 1 instead of 0 by default in lvm.conf. One of them has used it on all their XenServer and XCP-ng servers for a long time.

See xcp-ng/xcp#123

This would only impact storage that handles discards, and it would benefit at least a significant share of such devices, e.g. SSDs or Ceph storage.

Why is it set to 0 by default?

One of the users assumes that it may be related to hardware that suffers a performance impact when discards are issued (based on the following extract from Citrix Hypervisor's documentation) and wonders whether current-day storage hardware still suffers such a penalty.

Note: Reclaiming freed space is an intensive operation and can affect the performance of the storage array. You should perform this operation only when space reclamation is required on the array. Citrix recommends that you schedule this work outside the peak array demand hours

nfs sr-probe fails with QNAP NFS devices

Hi!

We had numerous reports of users having issues to create an NFS SR when using QNAP devices.

So it seems the problem is that the probe request (even via xe) doesn't return the export list, e.g.:

xe sr-probe type=nfs device-config:server=192.168.0.15
Error code: SR_BACKEND_FAILURE_101
Error parameters: , The request is missing the serverpath parameter,

It should be:

xe sr-probe type=nfs device-config:server=192.168.0.15
Error code: SR_BACKEND_FAILURE_101
Error parameters: , The request is missing the serverpath parameter,
<nfs-exports>
	<Export>
		<Target>192.168.0.15</Target>
		<Path>/foo/bar</Path>
		<Accesslist>(everyone)</Accesslist>
	</Export>
</nfs-exports>

After some investigation, the common point was using a QNAP device and having showmount return share lists without any "permissions". E.g., an expected showmount output would be:

# showmount -e 10.0.1.197
Export list for 10.0.1.197:
/mnt/tank/backups/Xen 10.0.1.0
/mnt/tank/ISO         10.0.1.1
/mnt/tank/home/kevdog 10.0.1.1

But with QNAP devices, for example:

# showmount -e 172.18.9.34
Export list for 172.18.9.34:
/vm      
/Public  
/Web     

I suppose this is the reason for the issue: the NFS probe function expects another column with the permissions (an IP or "everyone"). QNAP told us there's no way to modify permissions for NFS shares, so we can't test on that side.

Problem might be around here:

sm/drivers/nfs.py

Lines 187 to 216 in 46a8c7a

def scan_exports(target):
    """Scan target and return an XML DOM with target, path and accesslist."""
    util.SMlog("scanning")
    cmd = [SHOWMOUNT_BIN, "--no-headers", "-e", target]
    dom = xml.dom.minidom.Document()
    element = dom.createElement("nfs-exports")
    dom.appendChild(element)
    for val in util.pread2(cmd).split('\n'):
        if not len(val):
            continue
        entry = dom.createElement('Export')
        element.appendChild(entry)
        subentry = dom.createElement("Target")
        entry.appendChild(subentry)
        textnode = dom.createTextNode(target)
        subentry.appendChild(textnode)
        (path, access) = val.split()
        subentry = dom.createElement("Path")
        entry.appendChild(subentry)
        textnode = dom.createTextNode(path)
        subentry.appendChild(textnode)
        subentry = dom.createElement("Accesslist")
        entry.appendChild(subentry)
        textnode = dom.createTextNode(access)
        subentry.appendChild(textnode)
    return dom

It might be doable to handle the case where no ACLs are visible for a share; a sketch of one option follows. At least, that's my theory, your input is welcome 👍
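
A hedged sketch of one possible fix (not a committed patch): inside the for loop of scan_exports(), replace the (path, access) = val.split() line so that export lines with no access column are tolerated:

    # Replaces `(path, access) = val.split()` in scan_exports():
    fields = val.split()
    path = fields[0]
    # QNAP devices list exports without an access column; default it so
    # the <Accesslist> element is still emitted.
    access = fields[1] if len(fields) > 1 else '*'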

Invalid iSCSI sessions handling when LUN is not available through all targets on portal

Given an iSCSI portal with multiple targets, where a LUN is available only on one target, e.g. targetd configured as follows:

{
  "fabric_modules": [],
  "storage_objects": [
    {
      "dev": "/var/lib/target/target1",
      "name": "var-lib-target-target1",
      "plugin": "fileio",
      "size": 10485760,
      "write_back": false,
      "wwn": "bf00cf22-a354-4a72-b51b-2d29637c26b8"
    }
  ],
  "targets": [
    {
      "fabric": "iscsi",
      "tpgs": [
        {
          "enable": true,
          "luns": [
            {
              "index": 0,
              "storage_object": "/backstores/fileio/var-lib-target-target1"
            }
          ],
          "node_acls": [
            {
              "mapped_luns": [
                {
                  "index": 0,
                  "tpg_lun": 0,
                  "write_protect": false
                }
              ],
              "node_wwn": "iqn.2012-06.com.example:initiator0"
            }
          ],
          "portals": [
            {
              "ip_address": "192.168.0.100",
              "iser": false,
              "port": 3260
            }
          ],
          "tag": 1
        }
      ],
      "wwn": "iqn.2012-06.com.example:target0"
    },
    {
      "fabric": "iscsi",
      "tpgs": [
        {
          "enable": true,
          "luns": [],
          "node_acls": [
            {
              "mapped_luns": [],
              "node_wwn": "iqn.2012-06.com.example:initiator0"
            }
          ],
          "portals": [
            {
              "ip_address": "192.168.0.100",
              "iser": false,
              "port": 3260
            }
          ],
          "tag": 1
        }
      ],
      "wwn": "iqn.2012-06.com.example:target2"
    }
  ]
}

full configuration JSON

This configuration emulates the behavior of Dell Compellent storage.

When adding an SR with a wildcard target IQN for multipathing support, the operation fails with
The SR is not available [opterr=Error reporting error, unknown key Device not appeared yet] and LUN offline or iscsi path down, due to incorrect iSCSI session handling.

Manually reconnecting (logging in) the sessions makes the operation succeed.
The following (admittedly incorrect) patch "fixes" this issue, but it significantly increases storage operation execution times:

diff --git a/drivers/LVHDoISCSISR.py b/drivers/LVHDoISCSISR.py
index 7562f36..ed4a439 100755
--- a/drivers/LVHDoISCSISR.py
+++ b/drivers/LVHDoISCSISR.py
@@ -145,6 +145,7 @@ class LVHDoISCSISR(LVHDSR.LVHDSR):
                     srcmd_copy.dconf['multiSession'] = IQNstring
                     util.SMlog("Setting targetlist: %s" % srcmd_copy.dconf['targetlist'])
                     self.iscsiSRs.append(BaseISCSI.BaseISCSISR(srcmd_copy, sr_uuid))
+                util.doexec(['iscsiadm', '-m', 'node', '--loginall=all'])
                 pbd = util.find_my_pbd(self.session, self.host_ref, self.sr_ref)
                 if pbd <> None and not self.dconf.has_key('multiSession'):
                     dconf = self.session.xenapi.PBD.get_device_config(pbd)

Save time of last GC in XAPI

Somewhat related to #524.

What do you think of storing the timestamp of the last GC triggered on a specific SR (as a new field in XAPI, or just a record in other_config)?

This might help track discrepancies between a chain that has to be coalesced and the last time coalescing was done (and likely detect SR issues before it's too late!). A sketch of the other_config variant is below.
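
A minimal sketch of the other_config variant (the key name last_gc_run is hypothetical; session is assumed to be an authenticated XenAPI session):

    import time

    def record_gc_timestamp(session, sr_ref):
        # add_to_other_config fails if the key already exists, so remove
        # any previous value first.
        key = 'last_gc_run'
        try:
            session.xenapi.SR.remove_from_other_config(sr_ref, key)
        except Exception:
            pass
        session.xenapi.SR.add_to_other_config(sr_ref, key, str(int(time.time())))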

XenError dies if no XML file on the filesystem

If the filesystem doesn't have the:
/opt/xensource/sm/XE_SR_ERRORCODES.xml
file, it fails in a very ugly way:

on xs64bit:

======================================================================
ERROR: test_mount (test_NFSSR.TestNFSSR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/storage-manager/sm/tests/testlib.py", line 279, in decorated
    result = func(self, context, *args, **kwargs)
  File "/storage-manager/sm/tests/test_NFSSR.py", line 34, in test_mount
    sr = create_nfs_sr()
  File "/storage-manager/sm/tests/test_NFSSR.py", line 27, in create_nfs_sr
    sr = NFSSR.NFSSR(command, '0')
  File "/storage-manager/sm/drivers/SR.py", line 139, in __init__
    self.load(sr_uuid)
  File "/storage-manager/sm/drivers/NFSSR.py", line 75, in load
    raise xs_errors.XenError('ConfigServerMissing')
  File "/storage-manager/sm/drivers/xs_errors.py", line 32, in __init__
    raise Exception.__init__(self, '')
TypeError: TypeError: unbound method __init__() must be called with Exception instance as first argument (got XenError instance instead)

on trunk:

======================================================================
ERROR: test_mount (test_NFSSR.TestNFSSR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/storage-manager/sm/tests/testlib.py", line 279, in decorated
    result = func(self, context, *args, **kwargs)
  File "/storage-manager/sm/tests/test_NFSSR.py", line 34, in test_mount
    sr = create_nfs_sr()
  File "/storage-manager/sm/tests/test_NFSSR.py", line 27, in create_nfs_sr
    sr = NFSSR.NFSSR(command, '0')
  File "/storage-manager/sm/drivers/SR.py", line 139, in __init__
    self.load(sr_uuid)
  File "/storage-manager/sm/drivers/NFSSR.py", line 75, in load
    raise xs_errors.XenError('ConfigServerMissing')
  File "/storage-manager/sm/drivers/xs_errors.py", line 32, in __init__
    raise Exception.__init__(self, '')
TypeError: TypeError: exceptions must be classes, instances, or strings (deprecated), not NoneType

So we should come up with a different fix, something like:

--- a/drivers/xs_errors.py
+++ b/drivers/xs_errors.py
@@ -28,8 +28,7 @@ class XenError(object):
     def __init__(self, key, opterr=None):
         # Check the XML definition file exists
         if not os.path.exists(XML_DEFS):
-            print "No XML def file found"
-            raise Exception.__init__(self, '')
+            raise Exception("No XML def file found")

         # Read the definition list
         self._fromxml('SM-errorcodes')

In this case you get the output:

======================================================================
ERROR: test_mount (test_NFSSR.TestNFSSR)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/storage-manager/sm/tests/testlib.py", line 279, in decorated
    result = func(self, context, *args, **kwargs)
  File "/storage-manager/sm/tests/test_NFSSR.py", line 34, in test_mount
    sr = create_nfs_sr()
  File "/storage-manager/sm/tests/test_NFSSR.py", line 27, in create_nfs_sr
    sr = NFSSR.NFSSR(command, '0')
  File "/storage-manager/sm/drivers/SR.py", line 139, in __init__
    self.load(sr_uuid)
  File "/storage-manager/sm/drivers/NFSSR.py", line 75, in load
    raise xs_errors.XenError('ConfigServerMissing')
  File "/storage-manager/sm/drivers/xs_errors.py", line 31, in __init__
    raise Exception("No XML def file found")
Exception: No XML def file found

This output now makes sense.
