xcsf-dev / xcsf

XCSF learning classifier system: rule-based online evolutionary machine learning

License: GNU General Public License v3.0

C 86.73% C++ 5.70% CMake 1.40% Python 6.17%
learning-classifier-systems xcsf xcs evolutionary-algorithms reinforcement-learning supervised-learning stochastic-gradient-descent neural-networks neuroevolution unsupervised-learning

xcsf's Introduction

XCSF learning classifier system

An implementation of the XCSF learning classifier system that can be built as a stand-alone binary or as a Python library. XCSF is an accuracy-based online evolutionary machine learning system with locally approximating functions that compute classifier payoff prediction directly from the input state. It can be seen as a generalisation of XCS where the prediction is a scalar value. XCSF attempts to find solutions that are accurate and maximally general over the global input space, similar to most machine learning techniques. However, it maintains the additional power to adaptively subdivide the input space into simpler local approximations.

See the project wiki for details on features, how to build, run, and use as a Python library.



xcsf's People

Contributors

dependabot[bot], dpaetzel, pre-commit-ci[bot], rpreen


xcsf's Issues

Improve docs of `EarlyStoppingCallback`

I wasn't sure whether I'm allowed to edit the wiki (which is maybe a bit too brief in explaining the exact semantics of the new callback mechanism), and I also wanted to check whether I got this right (since I'm only interested in supervised learning right now, this is not as abstract as it could be, sorry):

  • Callbacks are called every PERF_TRIALS trials (let's call this an epoch), i.e. up to MAX_TRIALS/PERF_TRIALS times (if this is an integer, otherwise appropriately rounded).
  • The EarlyStoppingCallback checks either a training or a validation data metric as given by its monitor parameter (e.g. "train").
  • Which kind of metric EarlyStoppingCallback checks depends on loss_func (metric computation is done in xcs_supervised_fit) since the metric value is the mean loss_func value on the (train or validation) data during the PERF_TRIALS performed (i.e. during the epoch).
  • The patience parameter to EarlyStoppingCallback is measured at the trial level and not at the epoch level.

If I got this correct, I can edit the wiki accordingly, if you want. Or maybe just link to this issue for now?
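For concreteness, this is how I would configure it under that reading (a sketch using the constructor and callback arguments that appear later on this page; the parameter values are arbitrary):

import numpy as np
import xcsf

X = np.random.random((1000, 2))
y = np.random.random((1000, 1))

# The callback is evaluated every perf_trials trials; patience is counted in
# trials, so patience=5000 here allows five evaluations without improvement.
model = xcsf.XCS(x_dim=2, y_dim=1, n_actions=1,
                 max_trials=20000, perf_trials=1000)
callback = xcsf.EarlyStoppingCallback(monitor="train", patience=5000,
                                      restore_best=True, min_delta=0,
                                      start_from=0, verbose=True)
model.fit(X, y, callbacks=[callback], verbose=True)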

`random_state` parameter does not conform to sklearn interface

This is a minor nuisance but I thought I'd document it anyway.

The sklearn interface wants an estimator's random_state argument to be None by default (corresponding to not seeding the RNG) and typically to accept values >= 0 as valid seeds.

However,

In [3]: xcsf.XCS().get_params()["random_state"]
Out[3]: 0

(should return None) as well as

In [4]: xcsf.XCS(random_state=None)
Error: unable to import parameter: random_state

(should work).
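For reference, sklearn's own convention can be expressed with check_random_state, which is roughly the behaviour one would expect here (a sketch of the convention, not of xcsf's implementation):

from sklearn.utils import check_random_state

# sklearn's convention: None means "do not seed" (a fresh RNG state),
# while an int >= 0 gives a reproducible seed.
rng = check_random_state(None)   # unseeded RandomState
rng = check_random_state(42)     # RandomState seeded with 42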

Kernel crash on Windows

Hi guys,

thank you for putting so much time into this project. I am encountering issues when I try to train the model, as it crashes my Python kernel. Sometimes it seems to magically work, but I am not sure what I am doing wrong. Below are some screenshots from my Jupyter notebook where the problem occurs.

Getting the data ready: [screenshot]
The model defined for a supervised learning task: [screenshot]
The part where the kernel crashes, when I use the .fit method: [screenshot]

I have gone over the installation requirements and everything seems to be installed properly; however, it does not work.

I am running it on Windows 11 with Python version 3.10.10.

Any help would be greatly appreciated.

Kind regards,
Boris

Return error strings instead of hard exiting

At the moment there are various places where an error message is printed to stdout and exit(EXIT_FAILURE) is called. It would be better if a string containing an error message were returned by the function (or NULL if successful) so that the calling function can decide how to handle the error. By allowing this to propagate back up to the Python wrapper, it could throw an exception with the error instead of terminating; stdout doesn't seem to be captured by Jupyter notebooks, for example.
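If errors were propagated this way, calling code could handle them explicitly; a hypothetical sketch of the proposed behaviour (not the current API, which prints the error and exits):

import xcsf

# Hypothetical: if parameter errors were raised as exceptions rather than
# printed followed by exit(), they could be handled like this. The exception
# type is an assumption.
try:
    model = xcsf.XCS(condition={"type": "no_such_condition"})
except ValueError as err:
    print(f"XCSF reported an error: {err}")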

add random number seed parameter

Currently, the random number seed is set via a Python function def seed(value: int) -> None. Changing this to a parameter random_state will allow it to be set through JSON and therefore the stand-alone executable; it will also enable it to be set via a future set_params() and constructor args, making it more similar to sklearn.
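A sketch of the suggested interface (random_state as a constructor parameter, mirroring how it is used in later issues on this page):

import xcsf

model = xcsf.XCS(x_dim=1, y_dim=1, n_actions=1, random_state=42)
print(model.get_params()["random_state"])  # expected: 42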

Make option setting and getting symmetrical (Python bindings)

Currently, some options are set using strings but when their value is read, an integer is returned:

from xcsf.xcsf import XCS

xcs = XCS(1,1,1)
print(xcs.EA_SELECT_TYPE) # prints 0
xcs.EA_SELECT_TYPE = "roulette"
print(xcs.EA_SELECT_TYPE) # prints 0
xcs.EA_SELECT_TYPE = "tournament"
print(xcs.EA_SELECT_TYPE) # prints 1

I think the library would benefit from having symmetrical option setting and getting. For example, I'm currently preparing experiments as follows.

I have a module that interacts with the XCSF library for me; this module contains e.g.

def default_xcs_params():
    xcs = xcsf.XCS(1, 1, 1)
    return get_xcs_params(xcs)

def get_xcs_params(xcs):
    return {
        "OMP_NUM_THREADS": xcs.OMP_NUM_THREADS,
        "POP_INIT": xcs.POP_INIT,
        ...
        "EA_SELECT_TYPE": xcs.EA_SELECT_TYPE,
        ...
    }

def set_xcs_params(xcs, params):
    xcs.OMP_NUM_THREADS = params["OMP_NUM_THREADS"]
    xcs.POP_INIT = params["POP_INIT"]
    ...
    xcs.EA_SELECT_TYPE = params["EA_SELECT_TYPE"]
    ...

In my experiment code I want to override only the relevant settings (without
accessing the XCS() object directly):

params = default_xcs_params() | {
    "MAX_TRIALS" : 100000,
    "POP_SIZE": 100,
}

However, that does not currently work because get_xcs_params returns ints for options that set_xcs_params expects to be strs.

I haven't found the time to look into how hard it would be to have the Python bindings return strings.
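In any case, the requested symmetry amounts to a simple round-trip property (a sketch of the desired behaviour, not the current one):

import xcsf

xcs = xcsf.XCS(1, 1, 1)
xcs.EA_SELECT_TYPE = "tournament"
# Desired: reading the option back yields the string that was set,
# rather than the integer 1 that is currently returned.
assert xcs.EA_SELECT_TYPE == "tournament"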

`XCS().get_params()` does not contain the action config

First of all: Thank you for adding the sklearn interface! This helps a lot! 🙂

I just noticed that XCS().get_params() returns

{'version': '1.3.1', 'x_dim': 1, 'y_dim': 1, 'n_actions': 1, 'omp_num_threads': 8, 'random_state': 0, 'population_file': '', 'pop_init': True, 'max_trials': 100000, 'perf_trials': 1000, 'pop_size': 2000, 'loss_func': 'mae', 'set_subsumption': False, 'theta_sub': 100, 'e0': 0.01, 'alpha': 0.1, 'nu': 5, 'beta': 0.1, 'delta': 0.1, 'theta_del': 20, 'init_fitness': 0.01, 'init_error': 0, 'm_probation': 10000, 'stateful': True, 'compaction': False, 'ea': {'select_type': 'roulette', 'theta_ea': 50, 'lambda': 2, 'p_crossover': 0.8, 'err_reduc': 1, 'fit_reduc': 0.1, 'subsumption': False, 'pred_reset': False}, 'condition': {'type': 'hyperrectangle_csr', 'args': {'eta': 0, 'min': 0, 'max': 1, 'spread_min': 0.1}}, 'prediction': {'type': 'nlms_linear', 'args': {'x0': 1, 'eta': 0.1, 'evolve_eta': True, 'eta_min': 1e-05}}}

which does not include the action dictionary. I'd expect

XCS().get_params()["action"]

to return a dict (the default which is probably {"type": "integer"}?) but it throws a KeyError.

Interestingly, this works:

xcsf.XCS(action={"type" : "integer"}).get_params()["action"]

and correctly returns {"type" : "integer"}.

File saving/loading neural layer parameters

Neural network initialisation arguments/parameters are not written to the output file for persistent storage with save(). This may cause problems if load() is called without setting the same layer arguments.
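For context, a save/load round trip looks roughly like the following (a sketch; the filename argument is assumed). With neural conditions or predictions, the same layer arguments currently have to be set on the new instance before load(), since they are not stored in the file.

import xcsf

model = xcsf.XCS(x_dim=1, y_dim=1, n_actions=1)
model.save("xcsf_model.bin")     # layer arguments are not written to this file

restored = xcsf.XCS(x_dim=1, y_dim=1, n_actions=1)
# With neural layers, the same layer arguments would have to be set here first.
restored.load("xcsf_model.bin")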

Performing `xcs.predict(X)` alters the model when it should not (supervised learning)

Hi! 🙂

When doing supervised learning, I'd expect model.predict(X) not to change model. However, xcs.predict(X) does sometimes change the model. This is especially problematic for large, high dimensional data sets.

Why is the model even changed by xcs.predict(X)? Due to how XCSF performs covering: the classifiers created by covering are simply added to the population (and, correspondingly, other classifiers are deleted).

In my opinion, this is generally undesirable when doing supervised learning, where users expect model.predict(X) not to alter the model. Also, in most cases, supervised learning fits the model once and from then on only performs predictions, which means that there will never be a fitness signal for these newly created classifiers.

This is especially problematic for high-dimensional data, where new data is matched by existing classifiers with only low probability. In my case, I had a large test data set (50000 test data points, 20 dimensions) and upon doing xcs.predict(X) the existing, fitted population essentially got erased (all classifiers were replaced by random new classifiers with experience 0 and a correspondingly bad fit). While I'd expect the predictions to be bad in this case, I would definitely not expect the model's state to be erased.
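A sketch of how to observe this, using only calls shown elsewhere on this page (the out-of-range test inputs are chosen deliberately to trigger covering):

import numpy as np
import xcsf

X = np.random.random((1000, 20))
y = np.random.random((1000, 1))
X_far = np.random.random((1000, 20)) + 10.0  # outside the fitted range, forces covering

model = xcsf.XCS(x_dim=20, y_dim=1, n_actions=1, max_trials=1000)
model.fit(X, y)

before = model.json()
model.predict(X_far)    # covering may insert new classifiers into the population
after = model.json()
print(before == after)  # one would expect True; currently this can be False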

How could this be solved? I guess one way to approach this while keeping the overall XCSF character would be to, upon xcs.predict(X), generate covering classifiers as necessary to perform the prediction but to not put them into the population. Another way would be to add a default rule which matches everything and predicts the data mean or something like that.

What do you think?

`spread_min` of `hyperrectangle_csr` is not respected (Nested config problem?)

The spread_min setting is somehow not respected. For example,

import xcsf
import json
import numpy as np

model = xcsf.XCS(max_trials=10, condition={"args": {"spread_min": 0.5}})

pop = json.loads(model.json())["classifiers"]

spreads = [rule["condition"]["spread"][0] for rule in pop]
print(np.min(spreads))

prints 0.10028… but I would expect this to be a number greater than 0.5.

Interestingly, the spread_min does not get propagated to the internal_params upon initialization

print(model.get_params()["condition"]["args"]["spread_min"])
# 0.5
print(model.internal_params()["condition"]["args"]["spread_min"])
# 0.1

and also not during or after fitting

import numpy as np
X = np.random.random((100, 1))
y = np.random.random(100)
model.fit(X, y)
print(model.internal_params()["condition"]["args"]["spread_min"])
# 0.1

Since eta is not propagated either, this may have something to do with the nested dict in the condition parameter?

model = xcsf.XCS(max_trials=10, condition={"args": {"eta": 0.5}})
print(model.get_params()["condition"]["args"]["eta"])
# 0.5
print(model.internal_params()["condition"]["args"]["eta"])
# 0

Bug in serialising population filename

When calculating the length of the population filename with strnlen() the null terminator is not included:

size_t len = strnlen(xcsf->population_file, MAX_LEN);

This prevents serialisation from loading a saved xcsf, for example Python will produce the following error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 140: invalid start byte

len needs to be incremented by 1 so that the null terminator is included.

Dictionary “order” is relevant when defining condition parameters

This is somewhat related to #111; I think the fix for that may have resulted in a new bug: it is now suddenly important to provide the "type" key as the first item in the dictionary. However, callers cannot reasonably be expected to guarantee a particular key position, since dictionaries that are built or merged programmatically may place "type" anywhere. The following does not work in most cases:

import numpy as np
import xcsf
import json

model = xcsf.XCS(max_trials=10, condition={"args": {"spread_min": 0.5}, "type" : "hyperrectangle_csr"})

pop = json.loads(model.json())["classifiers"]

spreads = [rule["condition"]["spread"][0] for rule in pop]
print(np.min(spreads))

Note that the last three lines of code are not even reached because the whole Python process gets killed by the "No condition type has been specified: cannot set params" error. Is that intended? (This may be a separate issue, though.)

accept 1-D inputs to fit / predict / score as well as 2-D

Currently, all X and y must be 2-D arrays for the Python supervised fit(), predict(), and score() functions. This means that arrays have to be reshaped when necessary; for example, a 1-D y must be reshaped with y = y.reshape(-1, 1). Accepting 1-D arrays where each value corresponds to a different sample will help clean up calling code and make it more compatible with sklearn.
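Until then, callers can normalise shapes up front; a small convenience sketch:

import numpy as np

y = np.arange(-1, 1, 0.01)
if y.ndim == 1:
    y = y.reshape(-1, 1)  # shape (N,) -> (N, 1): one target value per sample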

`predict` not deterministic

Hi, I may have found a bug in predict. I'd expect predict to be deterministic and this is also how it's documented in the wiki. However, if I do

import xcsf
import numpy as np
X, y = (np.random.random((300, 2)), np.random.random(300,))
X_test, y_test = (np.random.random((10, 2)), np.random.random(10,))
xcs = xcsf.XCS(x_dim = 2, max_trials = 1000).fit(X, y)
a = xcs.predict(X_test)
b = xcs.predict(X_test)
c = xcs.predict(X_test)
print(np.all(a == b))
print(a - b)
print(a - c)
print(b - c)

the output is

False
[[ 1.11022302e-16]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 1.11022302e-16]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [-1.11022302e-16]
 [ 0.00000000e+00]
 [ 0.00000000e+00]]
[[ 1.11022302e-16]
 [ 0.00000000e+00]
 [-1.11022302e-16]
 [ 5.55111512e-17]
 [ 1.11022302e-16]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 0.00000000e+00]]
[[ 0.00000000e+00]
 [ 0.00000000e+00]
 [-1.11022302e-16]
 [ 5.55111512e-17]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 0.00000000e+00]
 [ 1.11022302e-16]
 [ 0.00000000e+00]
 [ 0.00000000e+00]]

This is probably not a problem in most cases but it nevertheless looks rather odd to me.

Module not found error on Windows & Mac

Hi @rpreen,

after downloading and building the files according to your instructions, I'm receiving the following error on both macOS and Windows 10 64-bit when running any of the Python scripts in the build directory:

File "...\xcsf\build\example_maze.py", line 31, in import xcsf.xcsf as xcsf, Module Not Found

On Windows my Python version is 3.9.0, CMake version is 3.19.2.

Can you help here? In case you need additional information just hit me up.

Best

Support early stopping or maybe even callbacks

Hi! I think early stopping could be a nice feature. I currently don't have the time to implement it but thought I'd suggest it anyway 😉

For example, XCS.fit(X, y) could check whether the error changes "enough" (the error is measured anyway for the CLI output); the user should probably be able to configure a delta (and maybe also a window defining how often that delta is checked).

This would be especially helpful since right now there's no way to stop an XCS.fit(X, y) call, and those calls can take a long time (i.e. minutes) without any progress being made.

As an alternative: support callbacks of some sort (i.e. Python code run after each iteration) from which a stop signal can be given to XCS. I presume that this is more difficult (or impossible?) to implement, though, due to the interactions with the C library. An example of this is Optuna's callback support, where the callback receives a Study object whose stop() method can be called.

Non-standard use of hyphens in variable names

Use of hyphens instead of underscores in string representations of variable names and types is non-standard and complicates Python type checking. All hyphens should be converted to underscores.

`max_index` leads to unexpected bias in prediction array

Whenever there are several best actions with the same state-action value estimates in the prediction array, the user might expect that selecting actions greedily corresponds to choosing one of these actions uniformly at random. In particular, this is the case when nothing has been learned yet and the state-action value estimates of all the actions in the prediction array are equal.

However, the use of max_index in best-action selection means that the first of these best-action candidates is always selected.

This bias does not only concern the first encounter of a state: consider the case where, in a so-far-unseen state, the first action's state-action value estimate is selected due to this bias and then updated, and it turns out to be ever so slightly better than the default state-action value. In an ε-greedy setting (especially with a high initial ε and decay, which is not unusual), the system would then have to encounter this state quite a few times until it finally selects other actions (one could do the math on how often it would have to encounter the state, but I guess the point is clear without it 😉).
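For reference, a greedy selection that breaks ties uniformly at random could look like this (a sketch of the alternative behaviour, not the library's current one):

import numpy as np

def argmax_random_tiebreak(prediction_array, rng=None):
    """Pick uniformly at random among all maximal entries."""
    rng = np.random.default_rng() if rng is None else rng
    best = np.flatnonzero(prediction_array == prediction_array.max())
    return int(rng.choice(best))

# With equal estimates, every action is equally likely to be selected.
print(argmax_random_tiebreak(np.array([0.5, 0.5, 0.5])))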

I'm afraid I don't have the time right now to write a patch/PR but wanted to let you know anyway 🙂

Bug in Python `fit()` when `shuffle=False`

Due to the way the Python wrapper splits the data supplied to fit() into "epochs", if shuffle=False it will repeatedly iterate over the first perf_trials samples instead of iterating over the whole dataset:

The wrapper needs to pass the starting index into each call within the loop so that subsequent samples continue to be drawn from the sequence.

// break up the learning into epochs to track metrics
const int n = ceil(xcs.MAX_TRIALS / (double) xcs.PERF_TRIALS);
const int n_trials = std::min(xcs.MAX_TRIALS, xcs.PERF_TRIALS);
for (int i = 0; i < n; ++i) {
    const double train =
        xcs_supervised_fit(&xcs, train_data, NULL, shuffle, n_trials);
    double val = 0;
    if (val_data != NULL) {
        val = xcs_supervised_score(&xcs, val_data, xcs.cover);
    }
    update_metrics(train, val, n_trials);
    if (verbose) {
        print_status();
    }
    if (callbacks_run(calls)) {
        break;
    }
}

xcsf/xcsf/xcs_supervised.c, lines 97 to 128 at 37aa409:

double
xcs_supervised_fit(struct XCSF *xcsf, const struct Input *train_data,
                   const struct Input *test_data, const bool shuffle,
                   const int trials)
{
    double err = 0; // training error: total over all trials
    double werr = 0; // training error: windowed total
    double wterr = 0; // testing error: windowed total
    for (int cnt = 0; cnt < trials; ++cnt) {
        // training sample
        int row = xcs_supervised_sample(train_data, cnt, shuffle);
        const double *x = &train_data->x[row * train_data->x_dim];
        const double *y = &train_data->y[row * train_data->y_dim];
        param_set_explore(xcsf, true);
        xcs_supervised_trial(xcsf, x, y, NULL);
        const double error = (xcsf->loss_ptr)(xcsf, xcsf->pa, y);
        werr += error;
        err += error;
        xcsf->error += (error - xcsf->error) * xcsf->BETA;
        // test sample
        if (test_data != NULL) {
            row = xcs_supervised_sample(test_data, cnt, shuffle);
            x = &test_data->x[row * test_data->x_dim];
            y = &test_data->y[row * test_data->y_dim];
            param_set_explore(xcsf, false);
            xcs_supervised_trial(xcsf, x, y, NULL);
            wterr += (xcsf->loss_ptr)(xcsf, xcsf->pa, y);
        }
        perf_print(xcsf, &werr, &wterr, cnt);
    }
    return err / trials;
}

static int
xcs_supervised_sample(const struct Input *data, const int cnt,
                      const bool shuffle)
{
    if (shuffle) {
        return rand_uniform_int(0, data->n_samples);
    }
    return cnt % data->n_samples;
}

Add support for pickling XCSF

This should be possible by:

Saving:

  1. calling the existing xcsf_save() function to write a temporary binary file
  2. read the temporary binary file into bytes
  3. delete the temporary binary file
  4. return the read bytes to pickle

Loading:

  1. write the pickled bytes to a temporary binary file
  2. create a new XCSF instance
  3. load XCSF from the temporary file with xcsf_load()
  4. delete the temporary binary file
  5. return the loaded XCSF
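A minimal sketch of those steps, assuming Python-level save(filename) and load(filename) methods wrapping xcsf_save() and xcsf_load():

import os
import tempfile

import xcsf

def xcs_to_bytes(model):
    """Serialise an XCS model to bytes via a temporary binary file."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        model.save(path)              # 1. write a temporary binary file
        with open(path, "rb") as f:
            data = f.read()           # 2. read the file back into bytes
    finally:
        os.remove(path)               # 3. delete the temporary file
    return data                       # 4. hand the bytes to pickle

def xcs_from_bytes(data):
    """Restore an XCS model from previously pickled bytes."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        with open(path, "wb") as f:
            f.write(data)             # 1. write the pickled bytes to a file
        model = xcsf.XCS()            # 2. create a new XCSF instance
        model.load(path)              # 3. load XCSF from the temporary file
    finally:
        os.remove(path)               # 4. delete the temporary file
    return model                      # 5. return the loaded XCSF

# These helpers could back __getstate__/__setstate__ so that
# pickle.dumps(model) / pickle.loads(...) work transparently.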

Segmentation fault if `y` has shape `(N,)` for some `N`

The following code segfaults:

import numpy as np
import xcsf.xcsf as x


X = np.arange(-1, 1, 0.01)[:,np.newaxis]
y = np.arange(-1, 1, 0.01)


xcs = x.XCS(1, 1, True)

xcs.MAX_TRIALS = 1000
xcs.condition("hyperrectangle", {"min" : -1, "max" : 1, "spread-min": 0.1, "eta": 0})
xcs.prediction("rls-linear", {"x0": 1, "rls-scale-factor":1000, "rls-lambda": 1})
xcs.action("integer")

xcs.fit(X, y, True)

y_pred = xcs.predict(y)

However, if I change y to

y = np.arange(-1, 1, 0.01)[:,np.newaxis]

or the equivalent

y = np.arange(-1, 1, 0.01).reshape(-1, 1)

it works.

I'm not sure whether fixing this should be high priority (probably not) or whether adding documentation for this behaviour may suffice.

Adding multiple crossover operators

Currently only uniform crossover is implemented for ternary, hyperrectangle, and hyperellipsoid conditions; and only sub-tree crossover for GP-trees. Other operators such as one-point and two-point crossover could be added and made optional at runtime.
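For illustration, one-point crossover on a real-valued genome could look like the following (illustrative only, not the library's current operator; the function name and genome layout are assumptions):

import numpy as np

def one_point_crossover(parent1, parent2, rng=None):
    """One-point crossover on two equal-length real-valued genomes
    (e.g. flattened hyperrectangle centres and spreads)."""
    rng = np.random.default_rng() if rng is None else rng
    point = rng.integers(1, len(parent1))  # cut point in [1, len - 1]
    child1 = np.concatenate([parent1[:point], parent2[point:]])
    child2 = np.concatenate([parent2[:point], parent1[point:]])
    return child1, child2

c1, c2 = one_point_crossover(np.zeros(6), np.ones(6))
print(c1, c2)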

Extending hyperrectangle and hyperellipsoid conditions

Hyperrectangles and hyperellipsoids currently only use the centre-spread representation (csr). This could be extended to allow switching between csr and ordered bound (obr) and unordered bound (ubr) representations.

Hyperrectangles and hyperellipsoids are currently axis-parallel. This could be extended to enable rotation.
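For reference, converting between the centre-spread and ordered-bound encodings follows directly from the relation lower = centre - spread, upper = centre + spread quoted in a later issue (a small sketch; the function names are illustrative):

def csr_to_obr(center, spread):
    """Centre-spread -> (lower, upper) ordered bounds."""
    return center - spread, center + spread

def obr_to_csr(lower, upper):
    """Ordered bounds -> (centre, spread)."""
    return (lower + upper) / 2.0, (upper - lower) / 2.0

print(csr_to_obr(0.5, 0.1))  # (0.4, 0.6)
print(obr_to_csr(0.4, 0.6))  # approximately (0.5, 0.1)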

Freezing, exiting without error message

I'm new to your lib, and I'm running into an unexpected problem. When I call fit, the code freezes. I left it running with CPU at 100% for several hours and nothing had happened, not even a print to the command line or a log message, so I killed the process. I lowered both the population size and perf_trials down to 1 to see if maybe it was just slow reporting due to my config. Now, it pauses for about 30 seconds, and then exits abruptly without any message whatsoever, not even a traceback.

I'm a little stumped as to where to even start debugging this, so I'd appreciate recommendations. I've gotten it to run on a smaller data set w/o issue, and it doesn't appear to be running out of memory. The data set I'm working with is classification -- 41 columns for x and 33 classes encoded with one-hot for y, with 671,088 rows. Not small but not unreasonably large either. Could that be the source of my problem? Otherwise, I'm sure it's blatant user error of some sort.

Offending code snippet:

model = xcsf.XCS(
    x_dim=x.shape[-1],
    y_dim=y.shape[-1],
    n_actions=1,
    random_state=seed,
    max_trials=epochs * len(x_train),
    perf_trials=1,#len(x_train),
    pop_size=1,#5000,
    loss_func="log",
    e0=0.01,
    alpha=1,
    nu=5,
    beta=0.05,
    delta=0.1,
    theta_sub=400,
    theta_del=200,
    stateful=False,
    ea={
        "select_type": "roulette",
        "theta_ea": 200,
        "lambda": 2,
        "p_crossover": 0.8,
        "err_reduc": 1,
        "fit_reduc": 0.1,
        "subsumption": False,
        "pred_reset": False,
    },
    action={
        "type": "integer",
    },
    condition={
        "type": "tree_gp",
        "args": {
            "min_constant": 0,
            "max_constant": 1,
            "n_constants": 100,
            "init_depth": 5,
            "max_len": 10000,
        },
    },
)
callback = xcsf.EarlyStoppingCallback(
    monitor="val",
    patience=20000,
    restore_best=True,
    min_delta=0,
    start_from=0,
    verbose=True
)
print(model.json())  # This prints just before it freezes
model.fit(x_train, y_train, validation_data=validation_data, callbacks=[callback], verbose=True)
print("DONE")  # Never prints

add warm_start parameter to Python fit() and default False

Currently, repeated calls to Python fit() continue to update the population set, which is effectively warm_start=True. To exhibit behaviour more familiar to sklearn users, the default should be to reset the population set, i.e., warm_start=False.

This is a major behavioural change and will mean updating all of the examples to make sure they add warm_start=True.

Initialization of center-spread rectangular conditions may be suboptimal

I noticed that center-spread conditions seem to be generally larger (and often protrude a lot out of the input space defined by the min and max condition attributes) than unordered-bound conditions.

In this line, the maximum spread of center-spread rectangular conditions is set to

const double spread_max = fabs(xcsf->cond->max - xcsf->cond->min);

However, shouldn't this value be divided by two? The way that spread is used in cond_rectangle_dist, it defines lower and upper bounds by

lower = center - spread
upper = center + spread
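A quick worked example of the point (with cond min = 0 and max = 1):

cond_min, cond_max = 0.0, 1.0
center = 0.5

spread_max = abs(cond_max - cond_min)            # current value: 1.0
print(center - spread_max, center + spread_max)  # (-0.5, 1.5): twice the input range

spread_max_halved = spread_max / 2               # proposed value: 0.5
print(center - spread_max_halved, center + spread_max_halved)  # (0.0, 1.0): exactly the input range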

Computing prediction confidence

Currently only a moving average of the error (i.e., the mean prediction deviation) is tracked. Adding a mechanism to compute the confidence of predictions would be useful.

Semantics of rectangle-based conditions differ

Center-spread conditions define fully open intervals $(c - s, c + s)$: the match check compares a normalised distance against 1 (distance < 1), where that distance is < 1 if $|x - c| < s$ and exactly 1 if $|x - c| = s$.

On the other hand, unordered-bound conditions define closed intervals $[l, u]$, since the condition only fails to match if x[i] < l || x[i] > u.

This should probably be documented or fixed?

I personally will probably not use ubr right now; but I noticed this and wanted to create an issue anyway, just in case.
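For clarity, the two membership semantics described above amount to the following (an illustrative sketch; the real checks live in the C condition code):

def matches_csr(x, center, spread):
    """Centre-spread: open interval (center - spread, center + spread)."""
    return abs(x - center) / spread < 1

def matches_ubr(x, lower, upper):
    """Unordered/ordered bound: closed interval [lower, upper]."""
    return not (x < lower or x > upper)

print(matches_csr(0.75, center=0.5, spread=0.25))  # False: the boundary is excluded
print(matches_ubr(0.75, lower=0.25, upper=0.75))   # True: the boundary is included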
