
bayesian's People

Contributors

actions-user, hsbadr, paul-buerkner, topepo


bayesian's Issues

Support for tuning BRMS via tidymodels

Hi - I really like the brms package and found the bayesian package, which allows integration with tidymodels.

I read through the package references as well as the vignette, but did not see any mention of tuning a brms model via the tune_grid() function.

Can model priors be tuned? Is this a feature that will be coming out?

Thanks!
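
For concreteness, the API this request envisions might look like the sketch below; the tune() placeholder on a prior hyperparameter is entirely hypothetical, since tuning support is exactly what is being asked for here.

library(tidymodels)
library(bayesian)

# HYPOTHETICAL sketch only: a tunable prior scale is not currently supported.
# `prior_sd` is a made-up argument illustrating the requested feature.
spec <- bayesian(mode = "regression", prior_sd = tune()) |>
  set_engine("brms")

wf <- workflow() |>
  add_variables(outcomes = mpg, predictors = c(wt, hp)) |>
  add_model(spec, formula = mpg ~ wt + hp)

res <- tune_grid(wf, resamples = vfold_cv(mtcars, v = 5), grid = 5)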

How to pass weights

Hi. In brms I can write this:

brm(target1 | resp_weights(n) ~ (1 | edad_cat) + (1 | valor_cliente) + (1 | tipo),
    family = "binomial", data = train)

How can I add resp_weights() using bayesian?
Thanks in advance!
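
One possible approach, untested and based on the formula.override argument that appears in later issues on this page, is to pass the full brms formula (including the resp_weights() aterm) as an override, since aterms do not survive the standard formula interface:

# Untested sketch: route the resp_weights() aterm through formula.override
spec <- bayesian(
  family = "binomial",
  formula.override = bayesian_formula(
    target1 | resp_weights(n) ~ (1 | edad_cat) + (1 | valor_cliente) + (1 | tipo)
  )
) |>
  set_engine("brms")

fit(spec, target1 ~ edad_cat + valor_cliente + tipo + n,  # placeholder; the override takes effect
    data = train)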

Improve test coverage

We need to add more automated tests to detect and fix incorrect or unexpected behavior before it is introduced by future code changes or feature updates. We aim to get incrementally closer to 100% coverage. A starting point is sketched below.
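
As a minimal testthat sketch of the kind of check this would add (the expected classes follow parsnip conventions and are an assumption):

library(testthat)
library(bayesian)

test_that("bayesian() returns a valid parsnip model specification", {
  spec <- bayesian(mode = "regression")
  expect_s3_class(spec, "model_spec")  # parsnip convention (assumed)
  expect_s3_class(spec, "bayesian")
  expect_identical(spec$mode, "regression")
})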

Revise engine-specific encodings

The engine-specific encoding options dictate how predictors should be handled, ensuring that the data parsnip passes to brms::brm() yields a model fit as similar as possible to one produced by calling brms::brm() directly.

  • predictor_indicators describes whether and how to create indicator/dummy variables from factor predictors:
    • "none": do not expand factor predictors,
    • "traditional": apply the standard model.matrix() encodings, and
    • "one_hot": create the complete set of indicators, including the baseline level, for all factors.
      This encoding only affects cases where fit.model_spec() is used and the underlying model has an x/y interface. We currently use a formula interface.
  • compute_intercept controls whether model.matrix() should include the intercept in its formula.
    This affects more than the inclusion of an intercept column. With an intercept, model.matrix() computes dummy variables for all but one factor level. Without an intercept, model.matrix() computes a full set of indicators for the first factor variable, but an incomplete set for the remainder.
  • remove_intercept removes the intercept column after model.matrix() is finished.
    This can be useful if the model function automatically generates an intercept.
  • allow_sparse_x specifies whether the model function can natively accommodate a sparse matrix representation of the predictors during fitting and tuning.

@paul-buerkner Would you recommend changing any of the current options for the brms engine?

    parsnip::set_encoding(
      model = "bayesian",
      eng = "brms",
      mode = "regression",
      options = list(
        predictor_indicators = "traditional",
        compute_intercept = TRUE,
        remove_intercept = TRUE,
        allow_sparse_x = FALSE
      )
    )

Numeric response error for classification models

Hi,

I'm looking for clarification about why only the last format in the reprex works. I expect the first call to fit() not to work, since {parsnip} requires the outcome variable to be a factor for classification (even though brms::brm() expects a numeric outcome for classification). However, I don't see why the second and third calls to fit() fail; only the fourth format is successful.

library(bayesian)
library(brms)
library(parsnip)

df1 <- mtcars
df2 <- df1
df2$vs <- factor(df1$vs)

## I expect that this wouldn't work.
bayesian(mode = 'classification') |>
  set_engine('brms') |>
  fit(
    vs ~ mpg + wt + cyl,
    data = df1
  )
#> Error in `check_outcome()`:
#> ! For a classification model, the outcome should be a factor.

## I expect that this would work.
bayesian(mode = 'classification') |>
  set_engine('brms') |>
  fit(
    vs ~ mpg + wt + cyl,
    data = df2
  )
#> Error: Family 'gaussian' requires numeric responses.

## This also seems like it should work?
bayesian() |>
  set_engine('brms', family = bernoulli()) |>
  set_mode('classification') |> 
  fit(
    vs ~ mpg + wt + cyl,
    data = df2
  )
#> Warning: The following arguments cannot be manually modified and were removed:
#> family.
#> Error: Family 'gaussian' requires numeric responses.

## This is the only one that works. Functionally, I don't see why this is different from above.
bayesian(
  mode = 'classification', 
  engine = 'brms', 
  family = bernoulli()
) |>
  fit(
    vs ~ mpg + wt + cyl,
    data = df2
  )
#> Compiling Stan program...
#> ...

Question about functions/aterms in formula

Hi Hamada,

Do functions/aterms in a formula work in this version? I get an error, see reprex below.
I just started playing with this package today so maybe I'm missing something.

Thanks, Eric

library(bayesian)
#> Loading required package: brms
#> Loading required package: Rcpp
#> Loading 'brms' package (version 2.14.4). Useful instructions
#> can be found by typing help('brms'). A more detailed introduction
#> to the package is available through vignette('brms_overview').
#> 
#> Attaching package: 'brms'
#> The following object is masked from 'package:stats':
#> 
#>     ar
#> Loading required package: parsnip

schools_dat <- data.frame(J = seq(1, 8, 1),
                          y = c(28, 8, -3, 7, -1, 1, 18, 12),
                          sigma = c(15, 10, 16, 11, 9, 11, 10, 18))

mod <- brm(y | se(sigma) ~ 1 + (1 | J),
           data = schools_dat)
#> Compiling Stan program...
#> Start sampling
#> 
#> SAMPLING FOR MODEL 'aa3fde0a9305834e2e72ea8f987cc227' NOW (CHAIN 1).
#> removed output....
#> Chain 4:  Elapsed Time: 0.049574 seconds (Warm-up)
#> Chain 4:                0.038749 seconds (Sampling)
#> Chain 4:                0.088323 seconds (Total)
#> Chain 4:

tmod <-
  bayesian() %>%
  set_engine("brms") %>%
  fit(
    y | se(sigma) ~ 1 + (1 | J),
    data = schools_dat
  )
#> Error in se(sigma): could not find function "se"
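
A possible workaround, untested: later issues on this page use the formula.override argument, which bypasses the standard formula processing in which se() is not visible:

# Untested sketch: pass the se() aterm via formula.override
tmod <-
  bayesian(
    formula.override = bayesian_formula(y | se(sigma) ~ 1 + (1 | J))
  ) %>%
  set_engine("brms") %>%
  fit(y ~ J + sigma, data = schools_dat)  # placeholder formula; the override takes effect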

predict() does not work with intercept-only models

predict() does not work with intercept-only models.

library(bayesian)
#> Loading required package: brms
#> Loading required package: Rcpp
#> Loading 'brms' package (version 2.20.4). Useful instructions
#> can be found by typing help('brms'). A more detailed introduction
#> to the package is available through vignette('brms_overview').
#> 
#> Attaching package: 'brms'
#> The following object is masked from 'package:stats':
#> 
#>     ar
#> Loading required package: parsnip
library(tibble)

# Create simple data

shovel_data <- tibble(
  bead = factor(c(rep("red", 17), 
                  rep("white", 33)),
                levels = c("white", "red")))

# Fitting the model works fine

fit_obj <- bayesian(mode = "classification", 
         family = bernoulli(link = "logit")) |> 
  fit(bead ~ 1, data = shovel_data)
#> Compiling Stan program...
#> Start sampling
#> 
#> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 1.7e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.17 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.008 seconds (Warm-up)
#> Chain 1:                0.007 seconds (Sampling)
#> Chain 1:                0.015 seconds (Total)
#> Chain 1: 
#> removed output for chains 2-4 (nearly identical to chain 1)

# predict() does not work. I suspect that this has something to do with the fact
# that this is a model which only uses the intercept.

fit_obj |> 
  predict(new_data = shovel_data[1:5,])
#> Error in results[, 1]: incorrect number of dimensions

Created on 2024-03-13 with reprex v2.1.0

model comparison with k-fold cross validation

Hi,
I just discovered this package and it looks very cool. I'm now trying to do k-fold cross-validation with the log-score rule, which I would do in brms with:

k <- loo::kfold_split_random(K = 10, N = nrow(df))
k1 <- kfold(m1, chains = 1, folds = k)
k2 <- kfold(m2, chains = 1, folds = k)
loo_compare(k1, k2)

I guess I should start with

folds <- vfold_cv(houses_train, v = 10)

fit_folds1 <- my_workflow1 |>
  tune::fit_resamples(folds)

fit_folds2 <- my_workflow2 |>
  tune::fit_resamples(folds)

But I'm not sure how I should compare the models. (I can provide a complete reprex, if it's not clear)
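
For what it's worth, the resampled metrics can be collected and compared with tune::collect_metrics(), although this yields standard tidymodels metrics (e.g., RMSE) rather than the ELPD-based log score that loo_compare() reports; a minimal sketch:

library(tune)

# Compare default resampling metrics across the two workflows:
collect_metrics(fit_folds1)
collect_metrics(fit_folds2)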

Set quantile parameter

I want to use the bayesian bindings for brms for quantile regression. I can't figure out how to set the quantile, as in this code:

n <- 200
x <- runif(n = n, min = 0, max = 10)
y <- 1 + 2 * x + rnorm(n = n, mean = 0, sd = 0.6*x)
dat <- data.frame(x, y)
# fit the 20%-quantile
fit <- brm(bf(y ~ x, quantile = 0.2), data = dat, family = asym_laplace())
summary(fit)
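
One untested possibility is to pass the full bf() specification through the formula.override argument that appears in other issues on this page, since the quantile is part of the brms formula rather than a separate model argument:

# Untested sketch: set the quantile via formula.override
fit <- bayesian(
  mode = "regression",
  family = asym_laplace(),
  formula.override = bayesian_formula(bf(y ~ x, quantile = 0.2))
) |>
  set_engine("brms") |>
  fit(y ~ x, data = dat)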

Best Way to Parallelize Fitting

I noticed on the RDocumentation page that there are numerous ways to parallelize the fit, including setting the cores argument, the threading argument, and setting future = TRUE. I am wondering whether there have been any timing/functionality tests to establish the "best" way to do this, and whether there are vignettes/examples saved anywhere. I did not see a significant speed-up from setting cores equal to the number of cores on my local machine.
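
For reference, the two most common knobs, as used elsewhere in these issues, are shown in the sketch below; whether set_engine() arguments such as threads pass through to brms::brm() unchanged is an assumption:

# Run the four chains in parallel, one core each (the usual first step):
spec <- bayesian(mode = "regression", chains = 4, cores = 4) |>
  set_engine("brms")

# Within-chain threading (assumed to pass through to brms::brm;
# requires the cmdstanr backend and a threading-enabled Stan build):
spec_threaded <- bayesian(mode = "regression", chains = 4, cores = 4) |>
  set_engine("brms", backend = "cmdstanr", threads = threading(2))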

Error in `bayesian::bayesian_fit()`: ! Unsupported or invalid formula.override

Hey, great package! I'm looking to do some training with bayesian, but I'm finding it challenging to use the brmsformula() formula specification.

Here's the error I keep getting:

# devtools::install_github("hsbadr/bayesian")
# devtools::install_github("RobinHankin/Brobdingnag") # Prevent conflict between dbplyr and brms

library(tidyverse)
library(bayesian)
library(tidymodels)
library(data.table)

golddata <- fread(
    sep=",", header=TRUE, 
    text="
  Year, Event, Athlete, Medal, Country, Time
1896, 100m Men,        Tom Burke,  GOLD,     USA,  12.00
1900, 100m Men,     Frank Jarvis,  GOLD,     USA,  11.00
1904, 100m Men,      Archie Hahn,  GOLD,     USA,  11.00
1906, 100m Men,      Archie Hahn,  GOLD,     USA,  11.20
1908, 100m Men,    Reggie Walker,  GOLD,     SAF,  10.80
1912, 100m Men,      Ralph Craig,  GOLD,     USA,  10.80
1920, 100m Men,  Charles Paddock,  GOLD,     USA,  10.80
1924, 100m Men,  Harold Abrahams,  GOLD,     GBR,  10.60
1928, 100m Men,   Percy Williams,  GOLD,     CAN,  10.80
1932, 100m Men,      Eddie Tolan,  GOLD,     USA,  10.30
1936, 100m Men,      Jesse Owens,  GOLD,     USA,  10.30
1948, 100m Men, Harrison Dillard,  GOLD,     USA,  10.30
1952, 100m Men,   Lindy Remigino,  GOLD,     USA,  10.40
1956, 100m Men,     Bobby Morrow,  GOLD,     USA,  10.50
1960, 100m Men,       Armin Hary,  GOLD,     GER,  10.20
1964, 100m Men,        Bob Hayes,  GOLD,     USA,  10.00
1968, 100m Men,        Jim Hines,  GOLD,     USA,   9.95
1972, 100m Men,    Valery Borzov,  GOLD,     URS,  10.14
1976, 100m Men,  Hasely Crawford,  GOLD,     TRI,  10.06
1980, 100m Men,      Allan Wells,  GOLD,     GBR,  10.25
1984, 100m Men,       Carl Lewis,  GOLD,     USA,   9.99
1988, 100m Men,       Carl Lewis,  GOLD,     USA,   9.92
1992, 100m Men, Linford Christie,  GOLD,     GBR,   9.96
1996, 100m Men,   Donovan Bailey,  GOLD,     CAN,   9.84
2000, 100m Men,   Maurice Greene,  GOLD,     USA,   9.87
2004, 100m Men,    Justin Gatlin,  GOLD,     USA,   9.85
2008, 100m Men,       Usain Bolt,  GOLD,     JAM,   9.69
2012, 100m Men,       Usain Bolt,  GOLD,     JAM,   9.63
2016, 100m Men,       Usain Bolt,  GOLD,     JAM,   9.81
")

golddata_tbl <- as_tibble(golddata)

golddata_tbl
#> # A tibble: 29 × 6
#>     Year Event    Athlete         Medal Country  Time
#>    <int> <chr>    <chr>           <chr> <chr>   <dbl>
#>  1  1896 100m Men Tom Burke       GOLD  USA      12  
#>  2  1900 100m Men Frank Jarvis    GOLD  USA      11  
#>  3  1904 100m Men Archie Hahn     GOLD  USA      11  
#>  4  1906 100m Men Archie Hahn     GOLD  USA      11.2
#>  5  1908 100m Men Reggie Walker   GOLD  SAF      10.8
#>  6  1912 100m Men Ralph Craig     GOLD  USA      10.8
#>  7  1920 100m Men Charles Paddock GOLD  USA      10.8
#>  8  1924 100m Men Harold Abrahams GOLD  GBR      10.6
#>  9  1928 100m Men Percy Williams  GOLD  CAN      10.8
#> 10  1932 100m Men Eddie Tolan     GOLD  USA      10.3
#> # … with 19 more rows

golddata_tbl %>%
    ggplot(aes(Year, Time)) +
    geom_point()+
    geom_line()

df <- golddata_tbl %>%
    select(Year, Time) 
df
#> # A tibble: 29 × 2
#>     Year  Time
#>    <int> <dbl>
#>  1  1896  12  
#>  2  1900  11  
#>  3  1904  11  
#>  4  1906  11.2
#>  5  1908  10.8
#>  6  1912  10.8
#>  7  1920  10.8
#>  8  1924  10.6
#>  9  1928  10.8
#> 10  1932  10.3
#> # … with 19 more rows

# MODELING ----

model_spec_bayesian <- bayesian(
    mode   = "regression",
    family = gaussian(),
    engine = "brms",
    formula.override = brmsformula(Time ~ Year, nl = TRUE)
)

recipe_spec_bayesian <- recipe(Time ~ Year, df) 

workflow_bayesian <- workflow() %>%
    add_model(
        model_spec_bayesian
    ) %>%
    add_recipe(recipe_spec_bayesian)

workflow_bayesian_fit <- fit(workflow_bayesian, data = df)
#> Error in `bayesian::bayesian_fit()`:
#> ! Unsupported or invalid formula.override!

Created on 2022-04-05 by the reprex package (v2.0.1)
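
Possibly relevant, though untested: the multivariate issue further down this page wraps the override in bayesian_formula(), and in brms nl = TRUE requires named non-linear parameters, so a plain linear override might look like this:

model_spec_bayesian <- bayesian(
    mode   = "regression",
    family = gaussian(),
    engine = "brms",
    # Untested: wrap the formula in bayesian_formula() and drop nl = TRUE,
    # which brms only accepts with named non-linear parameters
    formula.override = bayesian_formula(brmsformula(Time ~ Year))
)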

Clarify workflow of specifying and updating arguments

As discussed some time ago, I have finally started looking into the details of the bayesian package. However, I still struggle a bit with the current logic and intended workflow.

For example, I am unsure how to set and update prior and family consistently. Apparently, I can set prior in bayesian() but not the family argument. When using set_engine() I can specify both of these arguments. I assume this is intentional, but I don't fully understand the logic of it.

What is more, how can I update arguments that are not "main arguments" (such as prior) but apparently only "engine-specific arguments" (such as family)? Of course, I could reset the engine via set_engine(), but that then removes all the engine-specific arguments.

Some other things are confusing to me as well, but they are likely outside of the control of bayesian. For example, I apparently have to specify the formula either via fit(formula = ...) if the object is a model_spec from parsnip or via add_model(formula = ...) if the object comes from workflows. In particular, in this latter case, fit(formula = ...) would not work.
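
For reference, the multivariate issue below updates both main and engine-specific arguments through parsnip's update() method without resetting the engine; a minimal sketch under the assumption that this is the intended mechanism:

spec <- bayesian(mode = "regression", family = gaussian()) %>%
  set_engine("brms")

# Update a main argument without touching the engine setup:
spec <- update(spec, prior = brms::prior(normal(0, 1), class = b))

# Updating family the same way is assumed to work, as in the issue below:
spec <- update(spec, family = student())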

Interested in collaboration to identify next set of features?

Hi @hsbadr. I like what you have here so far!

It looks like there are some missing features in this package, and I'd love to help you build it out even more. I see you have testing identified as a good first issue. I can help with that!

I have just begun a project to translate Solomon Kurz's companion book to Statistical Rethinking that uses brms.

Interested in collaborating?

Here's the model I'm currently working on (the first one in the book):

brm(
  w | trials(36) ~ 0 + Intercept,
  data = list(w = 24),
  family = binomial(link = "identity"),
  prior = prior(beta(1, 1), class = b, lb = 0, ub = 1),
  seed = 2,
  file = "fits/b02.01"
)

Here's as close as I got:

dat <- tibble(w = 24, n = 36)

b2.1_rec <-
  recipe(w ~ 0, data = dat) |>
  step_intercept()

b2.1_mod <-
  bayesian(family = binomial(),
           # formula.override = brms::brmsformula(w | trials(n) ~ 0),
           prior = prior(beta(1, 1), class = b, lb = 0, ub = 1),
           seed = 2) |>
  set_engine("brms")

b2.1_workflow <-
  workflow() |>
  add_recipe(b2.1_rec) |>
  add_model(b2.1_mod) |>
  fit(data = dat)
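
One untested avenue: uncomment the formula.override line so that the trials() aterm reaches brms (keeping n as a column in the data), as in other issues on this page:

b2.1_mod <-
  bayesian(family = binomial(link = "identity"),
           # Untested: route the aterm through the override
           formula.override = bayesian_formula(
             brms::brmsformula(w | trials(n) ~ 0 + Intercept)
           ),
           prior = prior(beta(1, 1), class = b, lb = 0, ub = 1),
           seed = 2) |>
  set_engine("brms")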

Support multivariate non-linear models

I am building a model using the non-linear interface in brms, and it works like a charm with your package. Now I need to extend this to a multivariate model, i.e., multiple response variables, and I cannot seem to get it to work.

Brief description of the problem

library(tidyverse)
library(brms)
library(bayesian)

mydf <- tibble(
        x = sin(seq(1, 10, 0.25)),
        z = 2 * x + rnorm(length(x), 0, 0.5),
        y = 1 * x + rnorm(length(x), 0, 0.5)
)

myrec <- mydf %>%
        recipe() %>%
        update_role(z, new_role = "outcome") %>%
        update_role(y, new_role = "outcome") %>%
        update_role(x, new_role = "predictor") 
        #add_role(has_role("outcome"), new_role = "predictor")

myform <- bf(z ~ b1 * x, b1 ~ 1, nl = TRUE) +
        bf(y ~ b2 * x, b2 ~ 1, nl = TRUE)
myprior <- prior(normal(0, 1), nlpar = "b1", resp = "z") +
        prior(normal(0, 1), nlpar = "b2", resp = "y")
mymodel <- bayesian(family = normal(), cores = 4, chains = 4) %>%
        set_engine("brms") %>%
        set_mode("regression") %>%
        update(formula = z ~ z + y + x) %>%
        update(formula.override = bayesian_formula(myform)) %>%
        update(prior = myprior) %>%
        update(family = list("gaussian", "gaussian"))

myworkflow <- workflow() %>%
        add_recipe(myrec) %>%
        add_model(spec = mymodel)

myworkflow_fit <- myworkflow %>%
        fit(data = mydf)

I get the error message "Error in formula.default(object, env = baseenv()) : invalid formula", but when using just one response variable it works fine. So is this an unsupported feature, or am I doing something wrong?

If you instead set mymodel to be univariate like this:

# Single response variable works
mymodel <- bayesian(family = normal(), cores = 4, chains = 4) %>%
        set_engine("brms") %>%
        set_mode("regression") %>%
        update(formula.override = bayesian_formula(bf(z ~ b * x, b ~ 1, nl = TRUE))) %>%
        update(prior = prior(normal(0, 1), nlpar = "b")) %>%
        update(family = "gaussian")

Everything works.
