

cms-bot's Issues

Persistent test settings

A common source of churn in PR tests occurs when someone misses special settings that were used in a previous test, and then the tests must be restarted/repeated.

It would be useful to have some way to tell the bot that a certain workflow, external, etc. should always be used to test the PR. That way, if someone just writes "please test", the special settings will automatically be applied. The list of "permanent" special settings should probably be configured/stored separately from temporary settings that could still be specified in the "please test" command.
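The proposed separation of permanent and temporary settings could be sketched as below. The storage format and keys are hypothetical, not the bot's actual schema; the point is only that stored settings always apply and comment settings add on top.

```python
# Hypothetical store of "permanent" per-PR test settings, kept separately
# from anything given in a "please test" comment.
PERSISTENT = {
    "12345": {"workflows": ["136.731"], "externals": ["root"]},
}

def effective_settings(pr, comment_settings):
    """Permanent settings always apply; comment settings add on top."""
    merged = {k: list(v) for k, v in PERSISTENT.get(pr, {}).items()}
    for key, values in comment_settings.items():
        merged.setdefault(key, [])
        merged[key] += [v for v in values if v not in merged[key]]
    return merged

# A bare "please test" still picks up the stored workflow:
print(effective_settings("12345", {}))
# Temporary settings from the comment are combined with the stored ones:
print(effective_settings("12345", {"workflows": ["21234.0"]}))
```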

add igprof tests to jenkins workflows

The proposal is to instrument with mm (MEM_LIVE, MEM_TOTAL) and pp the following:

  • 136.731 with 100 events; instrument step2 and 3 (the main target for me is step3, but I think that step2 is important for other areas)
  • 21234.0 with 10 events [given that this starts from sim]; instrument step 1, 2 and 3
    In each case a single-threaded run is OK and practical (given igprof's MT [dis]functionality)

I think that for MEM_LIVE we should get reports at the end of each event, via process.igprof.reportToFileAtEvent

@kpedro88

Run TimeMemorySummary.py in PR tests

Recent discussions about performance changes pointed to the usefulness of a simple time and memory evaluation even in PR-like tests. Of course the timing information cannot be taken as a serious performance measurement (it depends on the machine and its load), but it could help in noticing large changes. The --customise Validation/Performance/TimeMemorySummary.py option is already used in the IBs; is there any specific reason not to activate it also in PR tests? The only drawback I see is that it will make the log files noticeably longer...

GitHub "squash and merge" not tracked in IB history

I notice that a GitHub "squash and merge" action, as used for cms-sw/cmssw#25386 to get rid of big files in the PR history, is apparently not properly handled as far as the IB history is concerned. Indeed, as can be seen in https://cmssdt.cern.ch/SDT/html/cmssdt-ib/#/ib/CMSSW_10_2_X, this PR seems not to be present, while the code is indeed there (checking with git log or looking at the commit history for the branch).

@smuzaffar is this part managed by cms-bot? I am also thinking of the PR history used for the release notes: is it going to be affected as well?

alternative-comparisons timeouts on a large set of changes

This is a follow up to the issue observed in cms-sw/cmssw#23135
The issue is that sometimes the alternative-comparisons are taking too long.

For the latest incident, I wanted to use this issue for tracking the progress.
@smuzaffar please get a tarball of what was in the comparisons outputs on the node running https://cmssdt.cern.ch/jenkins/job/compare-root-files-short-matrix/26507/console .
More specifically, I'm looking for the contents of validateJR and alternative-comparisons/

My own solution, applied interactively, is like the following; I apply it after a comparison has been running "too long". Notably, the threshold is not the time it takes but the output size, which I vary between 12 and 100 MB depending on the importance of the diffs.

find dqm* -name diff.ps -size +25M | cut -d "/" -f1 | while read -r d; do
  pid=$(ps -f -www --forest | grep -A2 "${d}_[0-9]" | grep root.exe | tail -1 | awk '{print $2}')
  echo "Too much output in $d: kill $pid"
  kill "$pid"
done

The pattern matching on the process name will have to be different; it will also need to check for the pdf outputs, which I never write/check myself (they appeared useful to others).
Something like this can be applied to resolve this issue as well.
However, perhaps a more straightforward logic of a timeout per individual diff is more appropriate in the jenkins jobs.
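The per-diff timeout suggested above could be sketched as follows. The command line is a placeholder, not the real comparison invocation; only the timeout mechanism (one subprocess per diff with a hard time limit) is the point.

```python
import subprocess

def run_one_diff(cmd, limit_sec=600):
    """Run a single comparison command; return True on success,
    False on timeout or non-zero exit."""
    try:
        subprocess.run(cmd, timeout=limit_sec, check=True)
        return True
    except subprocess.TimeoutExpired:
        print("diff timed out after", limit_sec, "s:", cmd)
        return False
    except subprocess.CalledProcessError:
        return False

# Example with a trivially fast command standing in for the diff:
ok = run_one_diff(["true"], limit_sec=5)
```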

@makortel @smuzaffar @fabiocos

false-positives in cms-bot "The following merge commits were also included on top of IB"

It looks like the detection of the last additional merge is failing: the last merge from an IB is reported no matter what.

One example is cms-sw/cmssw#21708

Tested at: c0ff3a9

The following merge commits were also included on top of IB + this PR after doing git cms-merge-topic:
a4f36f2
  • The commit a4f36f2 is the same as the last commit in CMSSW_10_0_X_2017-12-13-2300, which is used to run the tests. It should not be reported as a commit also included on top of IB.
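The intended filtering can be sketched as below. With git, reachability from the IB tag can be checked with `git merge-base --is-ancestor <sha> <ib_tag>`; here the check is abstracted into a plain set so the logic stands on its own.

```python
def extra_merges(found_merges, reachable_from_ib):
    """Report only merge commits that are NOT already part of the IB
    used for the tests."""
    return [sha for sha in found_merges if sha not in reachable_from_ib]

# a4f36f2 is the last commit of the IB itself, so it must not be reported;
# "deadbee" stands in for a genuinely extra merge on top of the IB.
ib_commits = {"a4f36f2"}
report = extra_merges(["a4f36f2", "deadbee"], ib_commits)
```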

notify after the IB tests are done if PR merge included more recent changes than the IB

I'm taking cms-sw/cmssw#8653 (comment) as an example:
in a case when "(HEAD+PR merge ) vs HEAD " is not the same as "(IB+ PR merge) vs IB",
the PR integration tests actually test (redundantly) other changes that were merged after the IB used for tests was made.

Could you please add a note in the jenkins build report (the first message from ib-any-integration posted in the thread) stating that the tests include the effects of more than just this PR's code changes, compared to what would be seen on top of the head?
This will be useful for the L2s checking the results.
Given this info, it's up to the person reviewing the results if a new test should be requested.

RelMon threshold to report a difference

Could you please check the settings of the RelMon running in jenkins? Why does it not display differences that are actually showing up in the logs?
The specific case in #7565 (comment)
can be used as a reference example.

Add @echabert as watcher of strip packages

Would you please add me (@echabert) as a watcher of all the packages listed below?
I'm co-convenor of the strip local reco & calib group with @mmusich.
Thanks in advance

  • CalibTracker/Configuration
  • CalibTracker/Records
  • CalibTracker/SiStrip
  • DataFormats/SiStrip
  • DPGAnalysis/SiStripTools
  • RecoLocalTracker/ClusterParameterEstimator
  • RecoLocalTracker/Configuration
  • RecoLocalTracker/Records
  • RecoLocalTracker/SiStrip
  • RecoLocalTracker/SubCollectionProducers
  • SimTracker/Common
  • SimTracker/Configuration
  • SimTracker/Records
  • SimTracker/SiStripDigitizer

relmon comparisons directory cleanup

This is a follow up to cms-sw/cmssw#22531 (comment)

I checked the comparison files available in /afs/cern.ch/work/m/muzaffar/public/PR22531.tar.gz

By file count, most of them are in the relmon directories,

e.g.
./PR22531/25660/1000.0_RunMinBias2011A+RunMinBias2011A+TIER0+SKIMD+HARVESTDfst2+ALCASPLIT
28825 files

Almost all of it is taken up by the html files that provide navigation through all of the directories, and in almost all cases there is no useful information: they just show "success" in the comparisons.

I think we can save a lot by dropping directories/htmls that contain only Success in the results, given the (standard in cms-bot) setup of showing only differences.
Based on the output format, I think only html files whose content matches the pattern "Skipped:\|Null:\|Fail:" should be kept. In the example with wf 1000.0 there are only 11 files matching "Skipped:\|Null:\|Fail:". The rest can be dropped.

I'm not very familiar with the RelMon options; perhaps this can already be done at the output-creation step. If that's not possible, a simple sweeping script can be added to rm all files with only success in the comparisons.
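The sweeping-script option could look like the filter below. It is shown as a pure function over (path, content) pairs so it is easy to check; the real script would walk the relmon directory and rm the paths it returns.

```python
import re

# Keep only html files whose content mentions an actual difference.
KEEP = re.compile(r"Skipped:|Null:|Fail:")

def files_to_drop(html_files):
    """html_files: iterable of (path, content) pairs.
    Return the paths that show only success and are safe to delete."""
    return [path for path, content in html_files
            if path.endswith(".html") and not KEEP.search(content)]

sample = [
    ("dir1/index.html", "Success: 120"),
    ("dir2/index.html", "Success: 119 Fail: 1"),
]
drop = files_to_drop(sample)
```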

Problem with igprof in IBs

I noticed there is no step3 profile for WF 136.731. The problem appears to be that the file name is too long (e.g. in https://cmssdt.cern.ch/SDT/jenkins-artifacts/igprof/CMSSW_11_0_X_2019-06-10-1100/slc7_amd64_gcc700/pp/136.731_RunSinglePh2016B+RunSinglePh2016B+HLTDR2_2016+RECODR2_2016reHLT_skimSinglePh_HIPM+HARVESTDR2/step3_RunSinglePh2016B+RunSinglePh2016B+HLTDR2_2016+RECODR2_2016reHLT_skimSinglePh_HIPM+HARVESTDR2.log):

sh: step3___RAW2DIGI,L1Reco,RECO,SKIM:SinglePhotonJetPlusHOFilter+EXOMONOPOLE,EI,PAT,ALCA:SiStripCalZeroBias+SiStripCalMinBias+TkAlMinBias+EcalESAlign,DQM:@standardDQM+@ExtraHLT+@miniAODDQM___None___auto:run2_data_relval___RECO,MINIAOD,DQMIO___performance___100_EndOfJob.gz: File name too long
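A possible fix, sketched below under the assumption that the limit is the usual per-component maximum (commonly 255 bytes on Linux filesystems): when the constructed profile name exceeds it, replace the tail with a short hash so the file can still be created and matched back to the full step description.

```python
import hashlib

def shorten(name, limit=255):
    """Truncate an over-long output file name, appending a short hash of
    the full name so distinct long names stay distinct."""
    if len(name) <= limit:
        return name
    digest = hashlib.sha1(name.encode()).hexdigest()[:10]
    return name[: limit - 11] + "_" + digest

# A stand-in for the over-long igprof output name from the log above:
long_name = "step3___" + "RAW2DIGI,L1Reco,RECO," * 20 + "_100_EndOfJob.gz"
short = shorten(long_name)
```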

Add me as watcher of alca category.

Could you please add me to the watchers of the packages in the alca category?

Does this clash with the list of packages I am already watching, i.e. will I get duplicate emails if I watch a package that is already in the alca category?

-Gregor

Remove mention of AFS from release build messages

We probably don't need to see the message "CERN AFS installation skipped for slc7_amd64_gcc630 as no CMSSW releases are now deployed on AFS." every time we build a release.

The bot workflow could probably be simplified somewhat with this message removed, but I am not 100% sure of where everything related to it needs to be removed.

Test whether a PR modifies the same files as other open PRs in the same branch

With the recent massive clang-format migration we have experienced multiple cases of PRs touching the same files that give rise to conflicts when one of them is merged. With such a large set of PRs and affected files it is difficult to easily keep track of all the possible conflicts.

In order to make this check easier, I have prepared a simple script based on the available PR monitoring snapshot https://raw.githubusercontent.com/cms-sw/cms-prs/master/cms-sw/cmssw/.other/files_changed_by_prs.json , so as to cross check whether a PR modifies the same files as others in a given branch, or optionally make this test for all the PRs of a branch:

#!/usr/bin/env python

from __future__ import print_function
import json
import sys
import requests

def build_open_file_list(prs_dict, branch):
    # map each changed file to the list of open PRs in <branch> that touch it
    open_file_list = {}
    for pr in prs_dict:
        if prs_dict[pr]['base_branch'] == branch:
            for file in prs_dict[pr]['changed_files_names']:
                open_file_list.setdefault(file, []).append(pr)
    return open_file_list

def check_pr_dict(prs_dict, prs_list, pr_number):
    for my_file in prs_dict[pr_number]['changed_files_names']:
        if len(prs_list[my_file]) > 1:
            print("File", my_file, "modified in PRs #", prs_list[my_file])


if __name__ == '__main__':

    if len(sys.argv) < 3:
        print("Usage: SearchPROverlap.py <PR number> <branch>")
        print('       <PR number>: number of PR belonging to <branch>, or "all" for loop on all open PRs in <branch>')
        sys.exit()

    my_pr = sys.argv[1]
    my_branch = sys.argv[2]
    print("Testing PR #", my_pr, "for branch", my_branch)

    json_response = requests.get("https://raw.githubusercontent.com/cms-sw/cms-prs/master/cms-sw/cmssw/.other/files_changed_by_prs.json")
    prs_dict = json.loads(json_response.text)

    my_list = build_open_file_list(prs_dict, my_branch)

    if my_pr == "all":
        for pr in prs_dict:
            if prs_dict[pr]['base_branch'] == my_branch:
                check_pr_dict(prs_dict, my_list, pr)
    else:
        if prs_dict[my_pr]['base_branch'] != my_branch:
            print("PR #", my_pr, "not belonging to branch", my_branch)
            sys.exit()
        check_pr_dict(prs_dict, my_list, my_pr)

It would be useful to implement the functionality of this script as a bot command, so as to allow reviewers to test on demand for possible conflicts with other PRs of the same branch.

add option for higher-stat relval tests

Request from (cms-sw/cmssw#21701 (comment)):

did you run any larger-scale samples for physics validation? It's hard to tell from the PR test plots (just some fluctuations)

Since your request is, of course, valid and has been raised a few other times for other PRs, may I suggest adding a special trigger to the bot, like please histat_test, so that those kinds of checks can be done centrally on a handful of samples, rather than relying on the good will and machine availability of the people submitting the PR?

This seems like a valid request to automate in-depth physics validation when needed for PRs. Most PRs won't need this.

One potential issue is that the high-stat samples would also be needed in the IB baselines. We could run them for all IB baselines, or just run them on demand (somehow keeping track of when they're already available so they don't get rerun unnecessarily).

We might want to limit this option to request high stats for a specific workflow, since most workflows may not need physics validation in any given case.
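A per-workflow trigger could be parsed as sketched below. The command wording ("please test histat workflow <wf>") is hypothetical; only the idea of scoping high stats to one workflow comes from the discussion above.

```python
import re

# Hypothetical trigger: "please test histat [workflow <wf>[,<wf>...]]"
HISTAT = re.compile(r"please\s+test\s+histat(?:\s+workflow\s+([\d.,]+))?")

def parse_histat(comment):
    """Return the requested workflows, ["all"] if none given,
    or None if the trigger is absent."""
    m = HISTAT.search(comment)
    if not m:
        return None
    return m.group(1).split(",") if m.group(1) else ["all"]

wf = parse_histat("please test histat workflow 136.731")
```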

DQM monitor not showing memory reduction

During the scrutiny of cms-sw/cmssw#22955 I noticed that the bot report was showing no memory variation, despite a number of tests being deleted. A manual run of dqmMemoryStats.py (dqmMemoryStats.py -i 10824.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+ALCAFull_2018+HARVESTFull_2018//DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root -x --summary -u KiB) was on the contrary showing a reduction of some 25 KiB in the DQM memory budget induced by this PR:

diff -b new.log ref.log

New 11530.10 KiB HLT/BPH
Old 11555.89 KiB HLT/BPH

New Total bytes: 381377.36 KiB
Old Total bytes: 381403.15 KiB

This would seem a "feature" of https://github.com/cms-sw/cms-bot/blob/master/logRootQA.py .

The direct application of the command used by the bot ( https://github.com/cms-sw/cms-bot/blob/master/logRootQA.py#L165 ) gives the right answer:

dqmMemoryStats.py -x -u KiB -p3 -c0 -d2 --summary -r ../../../CMSSW_10_2_X_2018-04-25-1100/work/10824.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+ALCAFull_2018+HARVESTFull_2018/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root -i 10824.0_TTbar_13+TTbar_13TeV_TuneCUETP8M1_2018_GenSimFull+DigiFull_2018+RecoFull_2018+ALCAFull_2018+HARVESTFull_2018/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root


************************* DQM level 2 folder breakdown *************************


-25.787 KiB HLT/BPH
Total bytes: -25.787 KiB

Apply categories_map to cms-data

The repositories in cms-data correspond to packages in cmssw. The same L2s from categories.py and category_map.py should be automatically assigned to review cms-data PRs.
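The lookup could be sketched as below. The mapping data is illustrative only; the real assignments live in categories.py and category_map.py, and the cms-data naming convention (package path with "/" replaced by "-") is an assumption of this sketch.

```python
# Illustrative stand-ins for the real categories.py / category_map.py data:
CATEGORY_MAP = {"RecoParticleFlow": "reconstruction"}      # package prefix -> category
CATEGORY_L2 = {"reconstruction": ["l2-user-a", "l2-user-b"]}  # category -> L2s

def l2s_for_data_repo(repo):
    """A cms-data repo named like 'RecoParticleFlow-PFProducer' is assumed
    to correspond to the cmssw package 'RecoParticleFlow/PFProducer';
    assign the same L2s as for that package's category."""
    package_prefix = repo.split("-")[0]
    category = CATEGORY_MAP.get(package_prefix)
    return CATEGORY_L2.get(category, [])

assignees = l2s_for_data_repo("RecoParticleFlow-PFProducer")
```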

Use another PR in baseline for comparison tests

During the development of conditions, AlCa finds it useful to separate PRs that update the global tags in autoCond into logically separate pull requests. In particular, coupled code+conditions updates should not introduce additional, unrelated condition changes. This helps to cross-check that no changes were introduced into workflows other than those intended. For example:

The comparison tests currently make comparisons against the most recent IB but global tags are created and approved outside the CMSSW release framework. So, for example, the GTs in cms-sw/cmssw#27644 are final and all future global tags will include these updates. Where there are conflicts between the global tags in open PRs, global tags in the PRs that are intended to be merged later (to maintain consistency with the global tag versioning) are constructed relative to the global tags in autoCond (rather than relative to the global tag queues) purely for the purpose of supporting the comparison tests.

It would be better if we could use the correct global tag, based on the content of the global tag queue, in the PR from the beginning. To support the PR tests in this case, a new syntax would need to be introduced to the bot to use additional PRs in the baseline for the comparison tests. For example:

[@cmsbuild,] please test baseline <#PR[,#PR[...]]>

Then cms-sw/cmssw#27698 would be tested with

please test baseline 27644

and cms-sw/cmssw#27644 would be tested with

please test baseline 27698,27651

where cms-sw/cmssw#27698 would also include the updated global tag content of cms-sw/cmssw#27644.
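Parsing the proposed command could be sketched as below, assuming the syntax "please test baseline <#PR[,#PR[...]]>" exactly as suggested above.

```python
import re

BASELINE = re.compile(r"please\s+test\s+baseline\s+([\d,]+)")

def baseline_prs(comment):
    """Return the list of PR numbers to merge into the baseline,
    or an empty list if the command is absent."""
    m = BASELINE.search(comment)
    return [int(p) for p in m.group(1).split(",") if p] if m else []

prs = baseline_prs("please test baseline 27698,27651")
```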

Add watchers

@abbiendi @trocino and Carlo Battilana (github ID?) would like to watch the following package. @Degano can you take care of it?

Alignment/MuonAlignment
Alignment/MuonAlignmentAlgorithms
DataFormats/CSCRecHit
DataFormats/GEMRecHit
DataFormats/DTRecHit
DataFormats/MuonDetId
DataFormats/MuonReco
DataFormats/MuonSeed
DataFormats/RPCRecHit
DQMOffline/Muon
DQMOffline/Trigger
HLTrigger/Muon
HLTriggerOffline/Muon
MuonAnalysis/MuonAssociators
RecoMuon/Configuration
RecoMuon/CosmicMuonProducer
RecoMuon/DetLayers
RecoMuon/GlobalMuonProducer
RecoMuon/GlobalTrackFinder
RecoMuon/GlobalTrackingTools
RecoMuon/L2MuonIsolationProducer
RecoMuon/L2MuonProducer
RecoMuon/L2MuonSeedGenerator
RecoMuon/L3MuonIsolationProducer
RecoMuon/L3MuonProducer
RecoMuon/L3TrackFinder
RecoMuon/MeasurementDet
RecoMuon/MuonIdentification
RecoMuon/MuonIsolation
RecoMuon/MuonIsolationProducers
RecoMuon/MuonSeedGenerator
RecoMuon/Navigation
RecoMuon/Records
RecoMuon/StandAloneMuonProducer
RecoMuon/StandAloneTrackFinder
RecoMuon/TrackerSeedGenerator
RecoMuon/TrackingTools
RecoMuon/TransientTrackingRecHit
SimDataFormats/DigiSimLinks
SimDataFormats/RPCDigiSimLink
SimDataFormats/TrackerDigiSimLink
SimDataFormats/TrackingAnalysis
SimGeneral/TrackingAnalysis
SimMuon/Configuration
SimMuon/MCTruth
TrackingTools/KalmanUpdators
TrackingTools/PatternTools
TrackingTools/TrackAssociator
TrackingTools/TrackFitters
TrackingTools/TrackRefitter
TrackingTools/TrajectoryState
TrackingTools/TransientTrack
TrackingTools/TransientTrackingRecHit
Validation/MuonIdentification
Validation/MuonIsolation
Validation/RecoMuon

Watch Particle Flow (2)

For some reason I am not able to watch PF developments and PRs any more.
For example, I didn't receive mail for PR #3144, while I was expecting Jan to make the pull request.
What happened?
I need to watch everything in RecoParticleFlow.
M

BTV-dedicated github labels

Dear experts,

I am wondering whether it would be beneficial for our (BTV) internal bookkeeping of PRs to have a dedicated github label in cmssw. The idea would be that any of our developers, at the moment of creating a PR, would trigger the CMS bot to add such a label and, in case for some reason we were not notified, the conveners and RECO contacts of the group would be added to the watchers list.

Is such a thing possible?

Thanks

Mauro

Modules taking long time during test of cmsdist with big rebuilds

In the test of cms-sw/cmsdist#5091 we have noticed some tests taking a long time in some modules; this can be seen from the message now issued by

https://github.com/cms-sw/cmssw/blob/master/Validation/Performance/python/TimeMemoryJobReport.py#L12

This has already been observed in a ROOT rebuild test, and is not seen in the corresponding baseline IBs (although we do not have systematic monitoring of it yet).
@slava77 suggests this could be related to how large rebuilds are done. @smuzaffar @mrodozov any ideas on this?

Check of merge commits in PR

cms-bot checks and lists the merge commits appearing in a test of a PR. The use of cms-merge-topic in the preparation of PRs is clearly discouraged in our documentation https://cms-sw.github.io/tutorial-resolve-conflicts.html , but in practice merge commits are often found in PRs, both as merges of branches (e.g. master, to update the PR) and even via cms-merge-topic. I guess developers find them simpler to manage than rebasing.
The use of merges does not necessarily jeopardize the history, and is often accepted by the reviewers; see as a relevant recent example cms-sw/cmssw#22594 (comment) . Still, merging is potentially dangerous, and tolerating different kinds of merge commits may cause the dangerous ones to be missed; see for instance a recent example in #26201 (luckily with limited consequences).

I see two drawbacks with the bot procedure to check merge commits:

  • the merge is done on top of the HEAD of the branch, but the search for merge commits seems to start from the IB tag (according to my understanding of https://github.com/cms-sw/cms-bot/blob/master/run-pr-tests#L245 ). This means that the check could list merges coming from other PRs already merged. This is one reason why I try to accumulate PRs to be merged during the day and merge them all together close to the IB deadline, so as to minimize the "pollution" of all tests;

  • the test simply searches for any merge commits. But as most of them are in practice not a real problem, and are normally accepted, the test does not provide really useful information to discriminate potentially dangerous additions.

In order to try to isolate the really dangerous merge commits, I have tried to implement what I believe should be the check procedure in the following utility:

23:11 cmsdev25 594> cat ~/bin/git-cms-check-duplicate-pr-merges 
#!/bin/bash

#set -o verbose

print_help() {
    echo ""
    echo "Check whether a feature branch to be merged contains PR merges already present in the base branch."
    echo ""
    echo "usage:   -b <base branch for comparison> <feature branch to be tested>"
    echo "options: -h display this help and exit"
    echo ""
}

BASE_BRANCH=""
FEATURE_BRANCH=""

# -b takes an argument, hence "b:" and OPTARG
while getopts "b:h" OPT;
do
  case ${OPT} in
  b) BASE_BRANCH=${OPTARG} ;;
  h) print_help && exit 0 ;;
  \?) exit 1 ;;
  esac
done
shift $(( OPTIND - 1 ))

if [ -z "${1}" ]; then
    echo "usage:   -b <base branch for comparison> <feature branch to be tested>" ; exit 1
fi
FEATURE_BRANCH=${1}

DATE=$(date +%s)

git checkout ${BASE_BRANCH}
git rev-list --topo-order --pretty=short --merges HEAD | grep "Merge pull request" | sort > /tmp/AAA-${DATE}
last_commit=$(git rev-list --topo-order HEAD | head -n 1)

echo "Testing merges beyond the last commit in the base branch:"
git show ${last_commit}

CURRENT_BRANCH="tmp/merge-test"

test1=$(git rev-parse --quiet --verify ${CURRENT_BRANCH})
if [ -n "${test1}" ]; then
    echo "Test branch already exists, abort test" ; exit 1
fi

git checkout -b ${CURRENT_BRANCH}
git cms-merge-topic ${FEATURE_BRANCH}
git rev-list --topo-order --pretty=short --merges ${last_commit}.. | grep "Merge pull request" | sort > /tmp/BBB-${DATE}

# quote ${OUTPUT}: it may be empty or contain several lines
OUTPUT=$(comm -12 /tmp/AAA-${DATE} /tmp/BBB-${DATE})

if [ -z "${OUTPUT}" ]; then
    echo "No duplicated pull request merge"
else
    echo "Pull request merges already in integration branch:"
    echo "${OUTPUT}"
fi

I have started to use it in my private tests so as to exercise it.

It would be good if people could cross-check the idea behind it, so that, if appropriate, we can update the test in the bot to make it more effective.

@slava77 @perrotta @Dr15Jones @kpedro88 @davidlange6

Make it easy to find failures from comparison summary

When the Comparison Summary reports a failure, it is very difficult to find the plots or other pieces of information necessary to understand the failure. Can we provide a way to quickly jump to the relevant information about a failure?
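One option would be to attach, to each failing comparison, a direct link into the artifacts area next to the summary line. The URL pattern below is hypothetical, purely to illustrate the idea.

```python
# Hypothetical base URL for the per-build comparison artifacts:
BASE = "https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration"

def failure_links(build_id, failed_workflows):
    """Build one deep link per failing workflow so the reader can jump
    straight to the relevant plots instead of navigating the full tree."""
    return ["%s/%s/comparisons/%s" % (BASE, build_id, wf)
            for wf in failed_workflows]

links = failure_links("PR-12345/100", ["136.731", "10824.0"])
```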

Create FWLite repo like Stitched

This repo would hold only the sources needed to build FWLite/Fireworks. In addition it would hold the generated CMakeLists.txt files and the Serialization.cc files from CondFormats/*Object.

Missing DQM comparison report

Since a couple of days at least the DQM comparison report is missing in the PR test output, with the message:

Not Found
The requested URL /SDT/@JENKINS_PREFIX@-artifacts/baseLineComparisons/CMSSW_10_6_X_2019-05-07-1100+607ffb/31550/ was not found on this server.

A problem was also previously reported about missing directories in the comparison of some workflows.

report compilation warnings in PR test summary

During a PR test, compilation warnings are visible by browsing the "Compilation" sub-page of the test output summary html, but unless this page is directly inspected for every test, they go unnoticed until the merge and the IB build.

The https://github.com/cms-sw/cms-bot/blob/master/report-pull-request-results.py script checks and reports only build errors, as detected by the function read_build_log_file at https://github.com/cms-sw/cms-bot/blob/master/report-pull-request-results.py#L237

This function could be updated to also report possible warning messages, without breaking the parsing of the file in that case, but simply adding to the output message the detection of one or more occurrences of the string "warning:".
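The suggested extension could look like the standalone log filter below: it collects "warning:" lines alongside errors, without treating warnings as a build failure. The error heuristics here are illustrative, not the ones actually used in report-pull-request-results.py.

```python
def scan_build_log(lines):
    """Split build-log lines into errors and warnings; warnings are
    reported but never cause the log to be classified as failed."""
    errors = [l for l in lines if " error: " in l or l.startswith("error:")]
    warnings = [l for l in lines if "warning:" in l]
    return errors, warnings

log = [
    "Foo.cc:10:5: warning: unused variable 'x' [-Wunused-variable]",
    "Compiling Bar.cc",
]
errors, warnings = scan_build_log(log)
```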
