
nf-core/configs

A repository for hosting Nextflow configuration files containing custom parameters required to run nf-core pipelines at different Institutions.

Using an existing config

The Nextflow -c parameter can be used with nf-core pipelines in order to load custom config files that you have available locally. However, if you or other people within your organisation are likely to be running nf-core pipelines regularly it may be a good idea to use/create a custom config file that defines some generic settings unique to the computing environment within your organisation.

Configuration and parameters

The config files hosted in this repository define a set of parameters which are specific to compute environments at different Institutions but generic enough to be used with all nf-core pipelines.

All nf-core pipelines inherit the functionality provided by Nextflow, and as such custom config files can contain parameters/definitions that are available to both. For example, if you have the ability to use Singularity on your HPC you can add and customize the Nextflow singularity scope in your config file. Similarly, you can define a Nextflow executor depending on the job submission process available on your cluster. In contrast, the params section in your custom config file will typically define parameters that are specific to nf-core pipelines.
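
For example, a minimal institutional config combining these scopes might look like the sketch below (the executor, queue name and resource limits are illustrative, not taken from a real institution):

```nextflow
// Hypothetical institutional config -- all values are illustrative
params {
  // Resource ceilings respected by nf-core pipelines
  max_memory = 128.GB
  max_cpus   = 16
  max_time   = 48.h
}

singularity {
  enabled    = true
  autoMounts = true
}

process {
  executor = 'slurm'
  queue    = 'standard'
}
```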

You should be able to get a good idea as to how other people are customising the execution of their nf-core pipelines by looking at some of the config files in nf-core/configs.

Offline usage

To use nf-core pipelines offline, we recommend using the nf-core download helper tool. This will download both the pipeline files and the config profiles from nf-core/configs. The pipeline files are then edited to load the configs correctly from their relative file path.

# Download the workflow + transfer to offline cluster
nf-core download rnaseq
scp nf-core-rnaseq-3.0.tar.gz [email protected]:/path/to/workflows   # or however you prefer to transfer files to your offline cluster
# Connect to offline cluster
ssh [email protected]
# Extract workflow files
cd /path/to/workflows
tar -xzf nf-core-rnaseq-3.0.tar.gz
# Run workflow
cd /path/to/data
nextflow run /path/to/workflows/nf-core-rnaseq-3.0/workflow -profile mycluster

If required, you can instead download the nf-core/configs files yourself and set the --custom_config_base / params.custom_config_base parameter in each pipeline to the location of the configs directory.

Adding a new config

If you upload your custom config file to nf-core/configs, it will be automatically downloaded and available at run-time to all nf-core pipelines, and to everyone within your organisation. Users will simply have to specify -profile <config_name> in the command used to run the pipeline. See nf-core/configs for examples.

Before adding your config file to nf-core/configs, we highly recommend writing and testing your own custom config file (as described above), and then continuing with the next steps.

N.B. In your config file, please also make sure to add an extra params section with params.config_profile_description, params.config_profile_contact and params.config_profile_url set to reasonable values. When executing an nf-core pipeline, users will then see who wrote the configuration profile and can report back if, for example, something is missing.
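
A minimal metadata block might look like this (the names and URL are of course placeholders):

```nextflow
// Profile metadata shown to users at pipeline start-up
params {
  config_profile_description = 'Example University cluster profile provided by nf-core/configs.'
  config_profile_contact     = 'Jane Doe (@janedoe)'
  config_profile_url         = 'https://hpc.example.edu'
}
```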

N.B. If you reference a shell environment variable within your profile, in some cases you may get an error during testing such as Unknown config attribute env.USER_SCRATCH -- check config file: /home/runner/work/configs/configs/nextflow.config (where the bash environment variable is $USER_SCRATCH). This happens because the GitHub Actions runner does not have your institutional environment variables set. To fix this, define an internal variable with a fallback value. A good example is in the VSC_UGENT profile.
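
As a sketch of that pattern (the variable name mirrors the error above; the /tmp fallback is an arbitrary choice):

```nextflow
// Resolve the institutional variable with a fallback so the config
// still parses on the GitHub Actions runner
def scratch_dir = System.getenv('USER_SCRATCH') ?: '/tmp'

singularity {
  cacheDir = "${scratch_dir}/singularity"
}
```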

Testing

If you want to add a new custom config file to nf-core/configs please test that your pipeline of choice runs as expected by using the -c parameter.

## Example command for nf-core/rnaseq
nextflow run nf-core/rnaseq --reads '*_R{1,2}.fastq.gz' --genome GRCh37 -c '/path/to/custom.config'

Documentation

You will have to create a Markdown document outlining the details required to use the custom config file within your organisation. You can use the template that we provide as a starting point and fill in the information for your cluster.

See nf-core/configs/docs for examples.

Currently documentation is available for the following systems:

Uploading to nf-core/configs

Fork the nf-core/configs repository to your own GitHub account. Within the local clone of your fork:

  • add the custom config file to the conf/ directory
  • add the documentation file to the docs/ directory
  • edit and add your custom profile to the nfcore_custom.config file in the top-level directory of the clone
  • edit and add your custom profile to the README.md file in the top-level directory of the clone

In order to ensure that the config file is tested automatically with GitHub Actions please add your profile name to the profile: scope (under strategy matrix) in .github/workflows/main.yml. If you forget to do this the tests will fail with the error:
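
The relevant fragment of .github/workflows/main.yml then looks roughly like this (sibling keys omitted; the existing profile names are illustrative):

```yaml
strategy:
  matrix:
    profile:
      - 'abims'
      - 'awsbatch'
      - '<profile_name>' # add your profile, keeping the list alphabetical
```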

Run python ${GITHUB_WORKSPACE}/bin/cchecker.py ${GITHUB_WORKSPACE}/nfcore_custom.config ${GITHUB_WORKSPACE}/.github/workflows/main.yml
Tests don't seem to test these profiles properly. Please check whether you added the profile to the Github Actions testing YAML.
set(['<profile_name>'])
##[error]Process completed with exit code 1.

Commit and push these changes to your local clone on GitHub, and then create a pull request on the nf-core/configs GitHub repo with the appropriate information.

Please request a review from @nf-core/maintainers and/or in #request-review on the nf-core Slack; provided that everything adheres to the nf-core guidelines, we will endeavour to approve your pull request as soon as possible.

Adding a new pipeline-specific config

Sometimes it may be desirable to have configuration options for an institute that are specific to a single nf-core pipeline. Such options should not be added to the main institutional config, as this will be applied to all pipelines. Instead, we can create a pipeline-specific institutional config file.

The following steps are similar to the instructions for a standard institutional config, but use the pipeline-specific variants of the folders, e.g. conf/pipeline/ and pipeline/.

⚠️ Remember to replace the <PIPELINE> and <PROFILE> placeholders with the pipeline name and profile name in the following examples

Institutional configs work because the pipeline nextflow.config file loads the nf-core/configs/nfcore_custom.config config file, which in turn loads the institutional configuration file based on the profile <PROFILE> supplied on the command line.

To add pipeline-specific institutional configs, we add a second includeConfig call to the pipeline nextflow.config file, which loads the pipeline/<PIPELINE>.config file from the nf-core/configs repo. This file contains the <PIPELINE>-specific institutional configuration, again organised into different profiles <PROFILE>.

The pipeline nextflow.config file should first load the generic institutional configuration file and then the pipeline-specific one. Each configuration file adds new params and overwrites any params already defined.
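
The loading order in the pipeline's nextflow.config can be sketched as:

```nextflow
// Generic institutional config first...
includeConfig "${params.custom_config_base}/nfcore_custom.config"
// ...then the pipeline-specific institutional config, which may
// overwrite params set above
includeConfig "${params.custom_config_base}/pipeline/<PIPELINE>.config"
```

(In the real pipelines each includeConfig call is wrapped in a try/catch block so that offline runs do not fail.)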

Note that pipeline-specific configs are not required and should only be added if needed.

Pipeline-specific institutional documentation

Currently documentation is available for the following pipelines within specific profiles:

Pipeline-specific documentation

Currently documentation is available for the following pipeline:

Enabling pipeline-specific configs within a pipeline

⚠️ This has to be done on a fork of the nf-core/<PIPELINE> repository.

Fork the nf-core/<PIPELINE> repository to your own GitHub account. Within the local clone of your fork, if not already present, add the following to nextflow.config after the code that loads the generic nf-core/configs config file:

// Load nf-core/<PIPELINE> custom profiles from different Institutions
try {
  includeConfig "${params.custom_config_base}/pipeline/<PIPELINE>.config"
} catch (Exception e) {
  System.err.println("WARNING: Could not load nf-core/config/<PIPELINE> profiles: ${params.custom_config_base}/pipeline/<PIPELINE>.config")
}

Commit and push these changes to your local clone on GitHub, and then create a pull request on the nf-core/<PIPELINE> GitHub repo with the appropriate information.

We will be notified automatically when you have created your pull request; provided that everything adheres to the nf-core guidelines, we will endeavour to approve your pull request as soon as possible.

Create the pipeline-specific nf-core/configs files

⚠️ This has to be done on a fork of the nf-core/configs repository.

Fork the nf-core/configs repository to your own GitHub account, and add or edit the following files in the local clone of your fork.

  • pipeline/<PIPELINE>.config

If not already created, create the pipeline/<PIPELINE>.config file, and add your custom profile to the profiles scope:

profiles {
  <PROFILE> { includeConfig "${params.custom_config_base}/conf/pipeline/<PIPELINE>/<PROFILE>.config" }
}
  • conf/pipeline/<PIPELINE>/<PROFILE>.config

Add the custom configuration file to the conf/pipeline/<PIPELINE>/ directory. Make sure to add an extra params section with params.config_profile_description and params.config_profile_contact at the top of pipeline/<PIPELINE>.config, set to reasonable values. When executing the nf-core pipeline, users will then see who wrote the pipeline-specific configuration profile and can report back if, for example, something is missing.

  • docs/pipeline/<PIPELINE>/<PROFILE>.md

Add the documentation file to the docs/pipeline/<PIPELINE>/ directory. You will also need to edit and add your custom profile to the README.md file in the top-level directory of the clone.

  • README.md

Edit this file, and add the new pipeline-specific institutional profile to the list in the Pipeline specific documentation section.

Commit and push these changes to your local clone on GitHub, and then create a pull request on the nf-core/configs GitHub repo with the appropriate information. In the pull-request description, add a link to the repository-specific pull request(s) that use this new code. Please request a review from @nf-core/maintainers and/or in #request-review on the nf-core Slack; provided that everything adheres to the nf-core guidelines, we will endeavour to approve your pull request as soon as possible. Both PRs will need to be merged at approximately the same time.

Help

If you have any questions or issues please send us a message on Slack.


Issues

Sarek on Irma breaks with new uppmax.config

When running Sarek on IRMA with the updated uppmax.config:

Command output:
sbatch: error: Batch job submission failed: Invalid feature specification.

#SBATCH -A ngi2016003 -p node -C mem512GB

This probably creates issues running other nf-core pipelines on Irma as well.

Weird occasional error when running nf-core pipelines with the bi profile

Examples are always like this:

Launching `nf-core/scrnaseq` [focused_einstein] - revision: ef3f49479f [dev]
NOTE: Your local project version looks outdated - a different revision is available in the remote repository [c8d04dc40a]
WARNING: Could not load nf-core/config profiles: https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config
No such file: Config file does not exist: https://raw.githubusercontent.com/nf-core/configs/master/conf/conf/igenomes.config

A hint at what is going wrong is the second URL, which is of course not correct. I am wondering what happens here and why the conf part of the URL is duplicated.

@alexblaessle and I got hit by this a couple of times already

Generate CODEOWNERS from university contacts

Tasks

Improve the readme

Make a logo and write some documentation explaining what this is, how it works and guidelines / instructions for how to add to it.

[IMPERIAL CONFIG] Not working anymore

I've been trying to run the smrnaseq pipeline on the HPC. It doesn't work if I try to use the imperial profile. The error I get relates to job sizing/resource selection. When I looked at the file, it turned out that the job being submitted was larger than the HPC can handle, and with a submission time of 0. The pipeline works fine if I just run it on one large node.

Add additional contact fields

We might also move from:

-    config_profile_contact = 'Denis O\'Meally (@drejom) or HPRCC Helpdesk ([email protected])'
+    config_profile_contact = 'Denis O\'Meally'
+    config_profile_contact_github = '@drejom'
+    config_profile_contact_email = "[email protected]"

Originally posted by @emiller88 in #577 (comment)

Basically, some modules have all of the info and some don't. It's fine if they don't want to include an email, for example. The real problem is that they're not in a consistent format.

So we also need to go through and clean up the CODEOWNERS errors.

Show documentation on the website

It would be good to either move or render the documentation in this repo on the main nf-core website somehow.

The stuff on the readme is quite good and general, and the docs for the profiles are very difficult to find.

Maybe a new subsection of the website dedicated for institutional configs?

Improve config documentation / README

The documentation for e.g. adding a new config (and what to put in it) is currently quite dense and sometimes skips over things.

We should do a clean-up pass of the README to make it more accessible, and maybe consider splitting it up into different pages.

BinAC Multiple queue logic

process.queue = { task.memory > 756.GB ? 'supercruncher': task.time <= 2.h ? 'short' : task.time <= 48.h ? 'medium': 'long' }

Add docs for different config profiles

Custom configs are now hosted centrally at nf-core/configs. We need to think about where we host the documentation for running these config files. These are currently added on a per-pipeline basis but it would be good to also host them centrally so they are available to all pipelines.

This will also probably involve refactoring the current docs to deal with this.

Documentation to-do list:

  • BINAC (Alex P)
  • CCGA
  • CFC (Alex P)
  • GIS
  • hebbe
  • phoenix (Alex P)
  • uct_hex
  • uppmax_devel
  • uppmax
  • uzh (Alex P)

Remove "rosalind_uge" from repository

Hello there! I am a bioinformatics scientist contractor working out of CDC (U.S.) for a group named SciComp. We manage the scientific computing infrastructure at CDC, including the Rosalind HPC cluster. One of our users, Gregory Sprenger, submitted the "rosalind_uge" config to nf-core a while back. Our group never approved of this, so we request that it be taken off of the repository. We have decided for the time being to maintain our own configuration files that users are instructed to use, so we do not want any configuration specific to CDC compute environments in nf-core.

Config in question: conf/rosalind_uge.config

Thank you for taking time to consider this issue, we appreciate it greatly.

update module use for computerome.md

Hi @marcmtk, would it be possible to update computerome documentation at https://github.com/nf-core/configs/blob/master/docs/computerome.md

Tasks

Thank you!

Add CI testing

It would be good to get Travis to run some basic tests, even if it's only checking that nextflow can load these configs properly.

Will need to set the base URL sensibly to ensure that the correct remote config files are loaded.

How can I contact you for help via Slack? Two different Gmail addresses were not accepted

Hello,
what is the trick to getting help from you via Slack? I tried to join nf-core on Slack at https://nfcore.slack.com/,
chose "continue with Google" and ended up with the message:
"The email address must match one of the domains listed below. Please try another email."
I cannot contact the workspace administrator at nf-core for an invitation either.
Please suggest a working way of joining nf-core on Slack.
Many thanks & looking forward to hearing from you.

The slurm executor scope gets ignored inside a config profile - bug or feature?

While composing the config file for our HPC, I found that the executor scope listed inside a profile gets ignored (example 3), so by default the pollInterval is 5s, which is very unfriendly on a multi-user cluster. The setting does work in the process scope outside a profile (example 2). Is this a bug or a feature?

Example 1 (works) - nextflow.conf:

executor {
      pollInterval = '2 min'
}
profile1 { 
      ...
}

run with

nextflow run -profile profile1 xxx

.nextflow.log

pollInterval: 2m

Example 2 (works) - nextflow.conf:

process { 
     pollInterval = '2 min'
      ...
}

run with

nextflow run -c nextflow.conf xxx

.nextflow.log

pollInterval: 2m

Example 3 (fails) - nextflow.conf:

profile2 {
     process { 
           pollInterval = '2 min'
           ...
           }
}

run with

nextflow run -profile profile2 xxx

.nextflow.log

pollInterval: 5s

Multiple queue logic

At the moment, we mostly use a single queue per defined cluster environment. However, it might be that we require more than just a single queue and instead should be able to e.g. define

process A uses the short queue, process B afterwards needs a lot of memory and uses mem, and process C runs for a very long time and uses long.

At the moment this is not addressed, but we should be flexible enough to be able to handle this 👍

Make regular releases of this repository

Some users of pipelines that already use the centralized configs repository had issues after yesterday's functional update in here:

N E X T F L O W  ~  version 19.01.0
Launching `nf-core/rnafusion` [sleepy_goldstine] - revision: b2fb212d31 [dev]
ERROR ~ Unable to parse config file: '/home-link/qeaga01/.nextflow/assets/nf-core/rnafusion/nextflow.config'

 Illegal character in scheme name at index 0: %5B:%5D/conf/cfc.config


-- Check '.nextflow.log' file for details

This affected all pipelines already using the centralized configs, as ${params.custom_config_base} isn't defined in the previous version of this repository, thus making all changes in this repository propagate to eager, rnaseq, ampliseq, rnafusion, hlatyping, methylseq (?) and potentially others as well.

We should maybe consider having something in nf-core/tools that checks that pipeline releases are pinned against a specific tag (can be something fixed other than latest), thus making any changes permanent. Thoughts on this?

Fix `nextflow config -show-profiles` command

This code block breaks the functionality of nextflow config -show-profiles with all nf-core pipelines:

configs/conf/bi.config

Lines 10 to 18 in bb7f67e

def determine_global_config() {
    if( System.getenv('NXF_GLOBAL_CONFIG') == null ) {
        def errorMessage = "ERROR: Environment variable NXF_GLOBAL_CONFIG is missing. Set it to point to global.config file."
        System.err.println(errorMessage)
        throw new Exception(errorMessage)
    }
    return System.getenv('NXF_GLOBAL_CONFIG')
}

Gives:

$ nextflow config -a nf-core/rnaseq
ERROR: Environment variable NXF_GLOBAL_CONFIG is missing. Set it to point to global.config file.
WARNING: Could not load nf-core/config profiles: https://raw.githubusercontent.com/nf-core/configs/master/nfcore_custom.config
No such file: Config file does not exist: https://raw.githubusercontent.com/nf-core/configs/master/conf/conf/test.config

I thought that this was a nextflow bug at first, see nextflow-io/nextflow#1639

@apeltzer - what's going on in this code? Can we refactor it somehow?

Add Azure-specific Sarek pipeline config

Currently this is part of the Azure-specific config for Sarek, but it can be optimized.

azure {
    batch {
        pools {
            auto {
                autoScale = true
                sku       = "batch.node.centos 7"
                offer     = "centos-container"
                publisher = "microsoft-azure-batch"
                vmType    = 'Standard_E64_v3'
                vmCount   = 1
            }
        }
    }
}

nf-core pipeline tutorial

Hi,
do you have any step-by-step tutorial for processing viral metagenomic samples?
It would be great if you could share it with me.
Regards

Think about how these will work offline

A point I thought of earlier today is that we need to think about how these configs will work offline.

It could simply be that we update the nf-core download command to also always download this repo. That may be sufficient.

Core Nextflow: auto assign profiles

Originally posted on the Nextflow repo: nextflow-io/nextflow#792 but was auto-closed due to inactivity. I still think it's a cool idea, so reposting here.

Suggestion is to make a PR to Nextflow to add functionality to config profiles, to allow them to be automatically assigned when a certain condition is met. That logic could be anything, but my initial suggestion was to match the system hostname.

In short, the aim is to make Nextflow support the following syntax:

profiles {
  uppmax {
    when = host.hostname.contains('.uppmax.uu.se')
    includeConfig 'conf/uppmax.config'
  }
  system_two {
    when = host.hostname.contains('.system_two.com')
    includeConfig 'conf/system_two.config'
  }
  system_three {
    when = System.getenv('SOME_VAR') == 'myvalue'
    includeConfig 'conf/system_three.config'
  }
}

Paolo gave some advice back on the original thread. The most pertinent bit is probably the following:

The config object is parsed using the ConfigParser. The first goal is to understand how the when definition is mapped in the config object. The ConfigParserTest can help to make some experiments.

Then once we have the auto profile information it should be enforce in the ConfigBuilder at this point.

Request: Lund University Profile `aurora` HPC

nf-core/scrnaseq feature request

Hi nf-core/scrnaseq developer!

I got interested in your very promising package and am trying to set it up on aurora - our HPC system in Lund.

This system (aurora) uses /projects as the mount point for storage. When using a Singularity container to store and run the analysis software, I cannot access my data unless I "mount" this path inside the container. To do so I would need a /projects path in the Singularity container, and this path would also need to be actively bound (-B /projects:/projects) at Singularity startup.

I think this is a very simple fix - I am just not familiar with Nextflow. Is there a configuration that enables me to do this, or do you need to change the Singularity image in order to become compatible with the Lund University HPC system?

Thank you for your time and help!

imperial_mb.config

Hey,

Not sure if you are aware, but the large med-bio queue (as used in imperial_mb.config) has been deprecated; annoyingly, you can still submit jobs to it and they will just stay queued.

Alan.

Imperial config: a time unit is missing in the withLabel:process_low block

Some of my processes were set to walltime=00:00:00. Solution below:

original:

withLabel:process_low {
                queue  = 'v1_throughput72'
                cpus   = { 2	 * task.attempt }
                memory = { 48.GB * task.attempt }
                time   = { 8 * task.attempt  }
}

change it to:

withLabel:process_low {
                queue  = 'v1_throughput72'
                cpus   = { 2	 * task.attempt }
                memory = { 48.GB * task.attempt }
                time   = { 8.h * task.attempt  }
}

Partial CI tests

Currently we always test all configs with every PR.

We should do something similar to modules and only check for changes in nfcore_custom.config.

Add a step-by-step guide on how to build an institutional profile

The current documentation is quite sparse on what sort of things you need to consider when trying to make an institutional profile.

I would suggest we make a step-by-step how-to guide on how to do so, with extra 'best practices' and 'tips and tricks' for different setup environments.

This would help lower the bar for new users to get Nextflow/nf-core adopted at their institute.

MPCDF Nodes (Raven) memory logic needs to be improved for shared nodes

I was running nf-core/mag with -profile mpcdf,raven,test_full and was getting failures when submitting to the grid scheduler.

Manually submitting the failed job resulted in

sbatch: error: job memory limit for shared nodes exceeded. Must be <= 120000 MB
sbatch: error: Batch job submission failed: Invalid feature specification

We need to play with the settings to set this correctly.

azurebatch config doesn't expose tokenDuration parameter, causing sadness

I recently discovered the existence of the azure.storage.tokenDuration parameter in the wrong way -- that is, by having a large and expensive workflow fail spectacularly 2/3 of the way through compute due to token expiration. 🙃 The nature of this failure appears to be a particular bummer, as it seems to have interfered with caching, requiring me to bodge together a fix to prevent needless re-execution of most of the expensive steps.

I'd propose exposing this parameter in the configuration file!

CPUs / Memory scaling factors

On our UPPMAX clusters, nodes have a given scaling factor whereby a certain amount of memory is allowed per CPU. Previously, if you submitted a job that requested both CPUs and memory and these values didn't match the scaling factor, SLURM would just silently bump up whichever value was needed to make the scaling factor match. However, this behaviour has apparently changed, and SLURM now simply refuses to run these jobs with the error Requested node configuration is not available.

Instead of writing new configuration files for every pipeline for each UPPMAX cluster, I wonder if we can instead just specify the relevant scaling factor (memory per cpu) for each UPPMAX cluster and adjust the job requests before submission to the cluster.

We already run job requirements through the check_max function, so perhaps we could tie into this somehow from the institutional configs? Then run a custom function to round up either the cpus or memory as needed.

The main problem I can see with this is that the check_max function takes one resource type at a time. Here, we need to know both the cpus and memory simultaneously. As such, I think this is going to require new modifications to every pipeline 😞
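
To illustrate the idea, a purely hypothetical sketch (the memory-per-CPU factor and the 16.GB request are made-up values, and in practice the requested amount would come from the process label rather than being hard-coded):

```nextflow
// Sketch: round the memory request up to the cluster's
// memory-per-CPU scaling factor before submission
def mem_per_cpu = 8.GB   // illustrative value, not a real UPPMAX figure

process {
  memory = {
    def requested = 16.GB * task.attempt     // whatever the label requests
    def minimum   = mem_per_cpu * task.cpus  // floor implied by the scaling factor
    requested > minimum ? requested : minimum
  }
}
```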

Documentation: UPPMAX nf-core modules

On the Swedish UPPMAX clusters you can now do module load nf-core-pipelines to get access to pre-downloaded and cached pipelines, complete with their Singularity containers.

This is currently basically undocumented; it would be great to mention it in the configs UPPMAX docs. It is now the recommended way to run nf-core pipelines on UPPMAX (especially on offline clusters, e.g. Bianca).
