
openZH / covid_19

424 stars, 424 watchers, 177 forks, 2.72 GB

COVID-19 case numbers of the cantons of Switzerland and the Principality of Liechtenstein (FL). The data is updated at best once a day (times of collection and update may vary). Start with the README.

Home Page: https://www.zh.ch/de/gesundheit/coronavirus/zahlen-fakten-covid-19.zhweb-noredirect.zhweb-cache.html?keywords=covid19&keyword=covid19#/

License: Creative Commons Attribution 4.0 International

Shell 3.78% JavaScript 1.52% Jupyter Notebook 32.34% Python 62.19% Ruby 0.17%

covid_19's People

Contributors

amslera, baryluk, borisdjakovic, calmyournerves, claudia013, corinnehuegli, davidezollino, dkoltg, dominikgehl, ebeusch, fab-benz, fabian, gaberoo, gd-zh, gmacauda, janetzkoa, jb3-2, je1982, judithbouman2412, kalakaru, kks-pmt, maekke, metaodi, mmznrstat, sarahnadeau, simgraworldwide, statovsky, tlorusso, viktoria023, zukunft


covid_19's Issues

Update Readme

Currently the README doesn't reflect that most cantons are scraped by @baryluk's army of scrapers.

Cases for Solothurn

Are there also data for Solothurn? With it, all cantons would be covered.

That would be COVID19_Fallzahlen_Kanton_SO_total.csv

Primitive scraper for SH

echo SH; d=$(curl --silent 'https://sh.ch/CMS/content.jsp?contentid=3209198&language=DE&_=1584807070095' | grep data_post_content | sed -E -e 's/\\n/\n/g'); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep "Im Kanton Schaffhausen gibt es" | sed -E -e 's/^.*\(([0-9.]+)\).*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | grep "bestätige" | sed -E -e 's/^.*strong>([0-9]+)[^0-9]*$/\1/'
SH
Scraped at 2020-03-21T16:19:46+00:00
Date and time: 20.03.2020
Confirmed cases: 14

This is really hacky, because sh.ch is absolutely abhorrent in the amount of JavaScript and dynamically loaded content it uses. But it works.

The URL that generates the JSON content was reverse-engineered by watching the network traffic while loading this site: https://sh.ch/CMS/Webseite/Kanton-Schaffhausen/Beh-rde/Verwaltung/Departement-des-Innern/Gesundheitsamt-3209198-DE.html

I have no idea what the _ parameter is; some kind of timestamp, I think, but I am not sure. It could be cache busting, and the request might not work without it. To be seen.
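For comparison, here is the same idea as a small Python script. The endpoint and the matched German phrases come straight from the one-liner above; the timestamp handling and error behaviour are my assumptions.

import re
import time

import requests

# The trailing `_` parameter looks like a cache-busting timestamp in
# milliseconds; sending the current time is a guess that mirrors what the
# browser does.
url = "https://sh.ch/CMS/content.jsp"
params = {"contentid": 3209198, "language": "DE", "_": int(time.time() * 1000)}

text = requests.get(url, params=params, timeout=30).text.replace("\\n", "\n")

date_match = re.search(r"Im Kanton Schaffhausen gibt es.*\(([0-9.]+)\)", text)
cases_match = re.search(r"bestätige.*?strong>([0-9]+)", text)

print("Date and time:", date_match.group(1) if date_match else "?")
print("Confirmed cases:", cases_match.group(1) if cases_match else "?")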

Fix UR scraper

It is broken because the website changed its format a bit.

The data is now in the form of a table. Should be easy to fix.

On it.
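A possible shape for the fixed scraper, assuming the figures now sit in a plain HTML table; the selector is a guess and untested.

import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.ur.ch/themen/2920", timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Assumption: the first <table> on the page holds the case figures.
table = soup.find("table")
if table is None:
    raise SystemExit("no table found - did the page layout change again?")

for row in table.find_all("tr"):
    cells = [c.get_text(strip=True) for c in row.find_all(["td", "th"])]
    print(cells)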

UR scraper fails

Currently the UR scraper fails:

Run the scraper...
Traceback (most recent call last):
  File "/home/runner/work/covid_19/covid_19/scrapers/parse_scrape_output.py", line 170, in <module>
    print("{:2} {:<16} {:>7} {:>7} OK {}".format(abbr, date, cases, deaths if not deaths is None else "-", scrape_time))
TypeError: unsupported format string passed to NoneType.__format__
Export database to CSV...

cc @baryluk
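The guard on deaths in that format call shows the intended pattern; the crash means another field (cases, or possibly date) came back as None, and "{:>7}" cannot format NoneType. A sketch of the fix, with names mirroring the traceback rather than the actual file:

def fmt(value, placeholder="-"):
    """Render None as a placeholder so width specs like {:>7} don't crash."""
    return placeholder if value is None else value

def print_row(abbr, date, cases, deaths, scrape_time):
    print("{:2} {:<16} {:>7} {:>7} OK {}".format(
        fmt(abbr), fmt(date), fmt(cases), fmt(deaths), fmt(scrape_time)))

print_row("UR", "2020-03-21", None, None, "16:57:49")  # prints "-" instead of crashing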

Scraper for VD

Here is a python scraper for Canton VD.

Hopefully it will do the job going forward, but it's not perfect: I had to extract the data from the page text, as I didn't find a way to download it from Datawrapper (JavaScript) directly.
Also, the data provided there doesn't include anything prior to 10/03/2020.

Anyways, hope this helps.

# -*- coding: utf-8 -*-
from selenium import webdriver
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd

### Watch out: Selenium with Firefox also needs geckodriver on the PATH; it may be easier to configure with Chrome
firefox_binary = r'C:\Program Files (x86)\Mozilla Firefox\firefox.exe'  # path to the Firefox binary itself

'''Documentation & resources to help set-up & use selenium:  
    https://www.tutorialspoint.com/python_web_scraping/python_web_scraping_dynamic_websites.htm
    https://www.selenium.dev/documentation/en/webdriver/web_element/
    https://realpython.com/modern-web-automation-with-python-and-selenium/
    https://stackoverflow.com/questions/7861775/python-selenium-accessing-html-source
    https://stackoverflow.com/questions/51273995/selenium-python-dynamic-table
'''

### Set options for Selenium (several of these flags are Chrome-style and are
### harmless no-ops under Firefox; `options.headless = True` is what matters)
options = webdriver.FirefoxOptions()
options.headless = True
options.add_argument("disable-gpu")
options.add_argument("no-default-browser-check")
options.add_argument("no-first-run")
options.add_argument("no-sandbox")
options.add_argument("--test-type")
options.set_preference('browser.download.manager.showWhenStarting', False)
options.set_preference('browser.helperApps.neverAsk.saveToDisk', 'text/csv')
options.set_preference("browser.download.folderList", 2)  # 2 = use a custom download dir
options.set_preference("browser.download.dir", "c:\\downloads")

profile = webdriver.FirefoxProfile()
profile.accept_untrusted_certs = True

driver = webdriver.Firefox(firefox_binary=firefox_binary, options=options, firefox_profile=profile)

### Download
driver.get("https://datawrapper.dwcdn.net/tr5bJ/16/")
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()  # release the headless browser; everything below is pure parsing

### Get the data required (couldn't find a better way than searching the raw page text)
data_cursor_start=soup.text.find('chartData: "')
data_cursor_stop=data_cursor_start+soup.text[data_cursor_start:].find('",')
zoom=str(soup.text)[data_cursor_start:data_cursor_stop]

### Create DataFrame
table_lines=zoom.split('\\n')
line_array=[]
for each_line in table_lines[1:]:
    line=each_line.split('\\t')
    line_array += line
line_matrix= [line_array[x:x+5] for x in range(0, len(line_array),5)]

df=pd.DataFrame(line_matrix, columns=['date', 'ncumul_hosp','ncumul_released','ncumul_deceased','ncumul_conf'])
df['date']=pd.to_datetime(df['date'],yearfirst=True)
df['abbreviation_canton_and_fl']='VD'
df['time']=np.NaN
df['ncumul_tested']=np.NaN
df['ncumul_ICU']=np.NaN
df['ncumul_vent']=np.NaN
df['source']='https://datawrapper.dwcdn.net/tr5bJ/16/'

### Format & Save CSV
new_order=['date','time','abbreviation_canton_and_fl','ncumul_tested','ncumul_conf',\
           'ncumul_hosp','ncumul_ICU','ncumul_vent','ncumul_released','ncumul_deceased','source']
df = df.reindex(columns=new_order).set_index('date')  # same effect as the double transpose, but clearer

df.to_csv('COVID19_Fallzahlen_Kanton_VD_total.csv')

Primitive scraper for AG

echo AG; URL=$(curl --silent 'https://www.ag.ch/de/themen_1/coronavirus_2/lagebulletins/lagebulletins_1.jsp' | sed -E -e 's/<li>/\n<li>/g' | grep Bulletin | grep pdf | grep href | awk -F '"' '{print $6;}' | head -1); d=$(curl --silent "https://www.ag.ch/${URL}" | pdftotext - - | egrep -A 2 "(Aarau, .+Uhr|Stand [A-Za-z]*, [0-9]+)"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep Aarau, | sed -E -e 's/.*, (.+)/\1/'; echo -n "Confirmed cases: "; echo "$d" | egrep '^[0-9]+$'
AG
Scraped at 2020-03-21T17:36:17+00:00
Date and time: 20. März 2020 15.00 Uhr
Confirmed cases: 168

Primitive scraper for UR

echo UR; d=$(curl --silent "https://www.ur.ch/themen/2920" | grep "Personen gestiegen"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/^.*\(Stand[A-Za-z ]*, ([^\)]+)\).*$/\1/' ; echo -n "Confirmed cases: "; echo "$d" | sed -E -e 's/^.* ([0-9]+) Personen gestiegen.*$/\1/'
UR
Scraped at 2020-03-21T16:57:49+00:00
Date and time: 21. März 2020, 8.00 Uhr
Confirmed cases: 12

Primitive scraper for JU

echo JU; d=$(curl --silent --user-agent "Mozilla Firefox Mozilla/5.0; openZH covid_19 at github" "https://www.jura.ch/fr/Autorites/Coronavirus/Accueil/Coronavirus-Informations-officielles-a-la-population-jurassienne.html" | egrep -B 2 'Situation .*2020'); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep Situation | sed -E -e 's/^.*Situation (.+)<\/em.*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | egrep "<p.*<strong>[0-9]+" | sed -E -e 's/^.*>([0-9]+)<.*$/\1/'
JU
Scraped at 2020-03-21T17:21:17+00:00
Date and time: 20 mars 2020 (17h)
Confirmed cases: 29

Scraper FR

Received an email answer from FR saying they will start publishing the numbers on their website as of today (time unclear) and will keep them updated from then on.

Their email was:
Hello Madam,
We have received your request.
On this subject, as of today the figures for the canton of Fribourg will be published on the website of the State of Fribourg (www.fr.ch) and will be updated regularly.

This serves as a reminder to myself or others to check the website periodically today.

Rmd or Jupyter notebooks

Does someone know of Rmd or Jupyter notebooks that read and visualise the data?

If yes, it would be nice to include one or two of them here. We could also add a mybinder.org link to let people run them.

Great work!
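For reference, the core of such a notebook could be a single cell like the one below. The raw CSV path follows the repo's fallzahlen_kanton_total_csv naming, but treat the exact URL as an assumption.

import pandas as pd
import matplotlib.pyplot as plt

# Read one canton file straight from GitHub and plot cumulative confirmed cases.
url = ("https://raw.githubusercontent.com/openZH/covid_19/master/"
       "fallzahlen_kanton_total_csv/COVID19_Fallzahlen_Kanton_ZH_total.csv")
df = pd.read_csv(url, parse_dates=["date"])

df.plot(x="date", y="ncumul_conf", title="ZH - cumulative confirmed cases")
plt.show()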

Enable Esri Data Format

It would be good if the data were prepared in this format (the same as for Italy and China):

One row per day covering all cantons:
https://github.com/zdavatz/covid19_ch/blob/master/data-cantons-csv/dd-covid19-ch-cantons-20200318-example.csv

One row per day for the whole of Switzerland:
https://github.com/zdavatz/covid19_ch/blob/master/data-switzerland-csv/dd-covid19-ch-switzerland-20200318-example.csv

Then the data can be pulled directly into Esri. See e.g. Italy or Johns Hopkins. A sketch of this reshaping follows below.
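A rough pandas sketch of that reshaping; the glob pattern and column names follow the repo's CSV schema, but the details are assumptions, not a tested converter.

import glob

import pandas as pd

# Read every per-canton file and pivot to one row per day, one column per canton.
frames = [pd.read_csv(f) for f in glob.glob("fallzahlen_kanton_total_csv/*.csv")]
long = pd.concat(frames, ignore_index=True)

wide = long.pivot_table(index="date",
                        columns="abbreviation_canton_and_fl",
                        values="ncumul_conf")
wide.to_csv("covid19-ch-cantons-wide.csv")

# One row per day for the whole of Switzerland (NaN-aware sum across cantons).
wide.sum(axis=1, min_count=1).rename("ncumul_conf_CH").to_csv("covid19-ch-switzerland.csv")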

Primitive scraper for LU

echo LU; d=$(curl --silent 'https://gesundheit.lu.ch/themen/Humanmedizin/Infektionskrankheiten/Coronavirus' | grep "Im Kanton Luzern gibt es" | awk -F '>' '{print $3;}'); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/^.*Stand: (.+)(Uhr)?\).+$/\1/'; echo -n "Confirmed cases: "; echo "$d" | sed -e 's/ /\n/g' | egrep '[0-9]+' | head -1;
LU
Scraped at 2020-03-21T15:21:08+00:00
Date and time: 21. M&auml;rz 2020, 11:00 Uhr
Confirmed cases: 109

Open Source Helps!

Thanks for your work helping people in need! Your site has been added! I currently maintain the Open-Source-COVID-19 page, which collects all open source projects related to COVID-19, including maps, data, news, APIs, analysis, medical and supply information, etc. Please share it with anyone who might need the information in the list or who might contribute to some of those projects. You are also welcome to recommend more projects.

http://open-source-covid-19.weileizeng.com/

Cheers!

Data from SZ and FR missing

The cantons of Schwyz (77 infected, no deaths so far) and Fribourg (202 cases, 4 deceased) provide their figures only on request. Since no comprehensive testing is being done, the case counts are "not a relevant number", is how Fribourg justifies its passive information policy.

see: https://www.blick.ch/news/politik/aktuelle-coronavirus-zahlen-der-schweiz-so-informieren-kantone-ueber-die-corona-fallzahlen-id15810865.html

@baryluk all other Cantons should now be available on the website.

Primitive scraper for GR

echo GR; d=$(curl --silent "https://www.gr.ch/DE/institutionen/verwaltung/djsg/ga/coronavirus/info/Seiten/Start.aspx" | egrep ">Fallzahlen|Best(ä|&auml;)tigte F(ä|&auml;)lle|Personen in Spitalpflege|Verstorbene Personen"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep Fallzahlen | sed -E -e 's/.*Fallzahlen ([^<]+)<.*/\1/';  echo -n "Confirmed cases: "; echo "$d" | egrep "Best(ä|&auml;)tigte F(ä|&auml;)lle" | sed -E -e 's/( |<)/\n/g' | egrep '[0-9]+' | head -1; echo -n "Deaths: "; echo "$d" | grep "Verstorbene" | sed -E -e 's/( |<)/\n/g' | egrep '[0-9]+' | head -1
GR
Scraped at 2020-03-21T15:43:48+00:00
Date and time: 20.03.2020
Confirmed cases: 213
Deaths: 3

Primitive scraper for BE

echo BE; d=$(curl --silent 'https://www.besondere-lage.sites.be.ch/besondere-lage_sites/de/index/corona/index.html' | grep -A 20 'table cellspacing="0" summary="Laufend aktualisierte Zahlen'); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep "Stand:" | sed -E -e 's/^.*Stand: (.+)\).*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | egrep '<td .*<strong>[0-9]+<' | sed -E -e 's/.*>([0-9]+)<.*/\1/'; echo -n "Deaths: "; echo "$d" | egrep '<td[^<>]*>[0-9]+</td>' | sed -E -e 's/.*>([0-9]+)<.*/\1/';
BE
Scraped at 2020-03-21T16:34:20+00:00
Date and time: 21. März 2020
Confirmed cases: 377
Deaths: 3

Add neighbouring countries' regions

As the borders are not closed (and even if they were, it wouldn't be foolproof), it would be interesting to collect and display the infection rates of neighbouring France/Italy/Germany/Austria; ideally not the country-wide average, but the border regions only.

For France, one can find the numbers for Ain and Haute-Savoie fairly clearly in "point de situation"-titled PDFs:
https://www.auvergne-rhone-alpes.ars.sante.fr/liste-communiques-presse
Jura, Belfort, Doubs don't have clear separate numbers so far:
https://www.bourgogne-franche-comte.ars.sante.fr/liste-communiques-presse
Haut-Rhin has again clear numbers in PDFs here:
https://www.grand-est.ars.sante.fr/liste-communiques-presse?field_archive_ars_value=0

An official repository here:
https://www.data.gouv.fr/fr/datasets/donnees-relatives-a-lepidemie-du-covid-19/#_
Otherwise an unofficial data repository is here:
https://github.com/opencovid19-fr/data

Italy has a fairly complete official repository from which one could pull numbers:
https://github.com/pcm-dpc/COVID-19

Found this for Germany, at Landkreise level:
https://experience.arcgis.com/experience/478220a4c454480e823b17327b2bf1d4/page/page_1/
Here is an unofficial repository, but I can't vouch for it:
https://github.com/marlon360/rki-covid-api

And this for Austria:
https://info.gesundheitsministerium.at/
Again, maybe scraping from Wikipedia is more feasible, with the risk it may be unreliable or defaced.

GE, SZ, VD failed

@baryluk

I ran it again, now I get:

GE - - - FAILED
SZ - - - FAILED
VD - - - FAILED

Can you add a verbose option to show where it fails? Does that make sense?
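For what it's worth, a verbose mode could be as simple as switching the log level and dumping the raw scraper output for cantons that fail. This is a sketch of the idea, not the actual CLI of parse_scrape_output.py:

import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true",
                    help="show raw scraper output for failing cantons")
args = parser.parse_args()
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)

def report(abbr, raw_output, parsed):
    """Log the usual one-line status; with -v, also log what the scraper returned."""
    if parsed is None:
        logging.info("%s - - - FAILED", abbr)
        logging.debug("%s raw scraper output:\n%s", abbr, raw_output)
    else:
        logging.info("%s %s OK", abbr, parsed)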

Primitive scraper for TG

echo TG; d=$(curl --silent https://www.tg.ch/news/fachdossier-coronavirus.html/10552 | egrep "<li>Anzahl bestätigter|<em>Stand"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/^.*Stand ([^<]+)<.*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | grep 'Anzahl' | sed -E -e 's/.* ([0-9]+)<.*$/\1/'
TG
Scraped at 2020-03-21T15:30:42+00:00
Date and time: 21.3.20
Confirmed cases: 56

Primitive scraper for GE

echo GE; d=$(curl --silent "https://www.ge.ch/document/point-coronavirus-maladie-covid-19/telecharger" | pdftotext - - | egrep "Dans le canton de Genève|Actuellement.*cas ont|décédées"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | grep "Dans le" | sed -E -e 's/.*\((.*)\).*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | grep "cas ont" | sed -E -e 's/( |<)/\n/g' | egrep '[0-9]+' | head -1; echo -n "Deaths: "; echo "$d" | grep "décédées" | sed -E -e 's/^.*([0-9]+) [^,]* décédées.*$/\1/'
GE
Scraped at 2020-03-21T15:58:51+00:00
Date and time: 20.03 à 8h00
Confirmed cases: 873
Deaths: 7

Using pdftotext from poppler-utils (version 0.71.0).

There is some extra data in the PDF, such as the number of hospitalised cases, the number of cases receiving care, and the number of cases in intensive care.

There is also this PDF, with a nice table and a two-day history (yesterday and today):
https://www.ge.ch/document/covid-19-situation-epidemiologique-geneve/telecharger
but the table and graphs are raster images, so really not conducive to parsing. It could be done, but it is better to ask them to improve the website instead.

Primitive scraper for ZH

echo ZH; d=$(curl --silent https://gd.zh.ch/internet/gesundheitsdirektion/de/themen/coronavirus.html | egrep "Im Kanton Zürich sind zurzeit|\\(Stand"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/.*Stand (.+) Uhr.*/\1/'; echo -n "Confirmed cases: "; echo "$d" | sed -e 's/ /\n/g' | egrep '[0-9]+' | head -1
ZH
Scraped at 2020-03-21T15:25:55+00:00
Date and time: 20.3.2020, 16.30
Confirmed cases: 773

Uniform data files for the cantons

Would it be possible to get a uniform CSV format for all cantons? Best of all would be a tool that validates the format before upload; a sketch follows below.
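A minimal sketch of such a validator, assuming the column set used by the VD scraper above is the canonical schema:

import csv
import sys

EXPECTED = ["date", "time", "abbreviation_canton_and_fl", "ncumul_tested",
            "ncumul_conf", "ncumul_hosp", "ncumul_ICU", "ncumul_vent",
            "ncumul_released", "ncumul_deceased", "source"]

def validate(path):
    """Check that the CSV header matches the expected schema exactly."""
    with open(path, newline="") as f:
        header = next(csv.reader(f))
    if header != EXPECTED:
        print(f"{path}: header mismatch: {header}")
        return False
    return True

if __name__ == "__main__":
    results = [validate(p) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)

Something like `python validate.py fallzahlen_kanton_total_csv/*.csv` could then run locally or in CI before each push.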

Primitive scraper for NE

echo NE; d=$(curl --silent "https://www.ne.ch/autorites/DFS/SCSP/medecin-cantonal/maladies-vaccinations/Pages/Coronavirus.aspx" | grep 'Nombre de cas confirmés'); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/^.*>Neuchâtel(&#160;)* +([^<]+)<\/span>.*$/\2/'; echo -n "Confirmed cases: "; echo "$d" | sed -E -e 's/<br>/\n/g' | grep -A 3 ">Neuchâtel" | egrep "Nombre de .* confirmés" | sed -E -e 's/^.*[^0-9]+([0-9]+) pers.*$/\1/'; echo -n "Deaths: "; echo "$d" | sed -E -e 's/<br>/\n/g' | grep -A 3 ">Neuchâtel" | egrep "Nombre.* décès" | head -1 | sed -E -e 's/^.*[^0-9]+ ([0-9]+)( pers.*|<\/strong).*$/\1/'
NE
Scraped at 2020-03-21T17:11:14+00:00
Date and time: 21.03.2020, 15h30
Confirmed cases: 177
Deaths: 2

better monitoring of the updating process

As more and more people are involved (yay!) in updating the data and building scrapers, we might have to start thinking about how to monitor the updating process, and about how we can ensure that the data keeps being updated even if one of the scrapers fails or someone forgets to check for new data.

Do you have suggestions on how we could manage this, @metaodi @baryluk @ebeusch @herrstucki @andreasamsler @zdavatz?

A table in the README (or somewhere else?) that is refreshed automatically after each push to a single file might help, similar to what @baryluk has created here:

#61

The table we have now is built by hand.
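A starting point for automating it: take the newest date in each canton file and emit a Markdown table that CI could write into the README after every push. Paths and column names are assumptions based on the repo layout.

import glob

import pandas as pd

rows = []
for path in sorted(glob.glob("fallzahlen_kanton_total_csv/*.csv")):
    df = pd.read_csv(path)
    rows.append((df["abbreviation_canton_and_fl"].iloc[-1], df["date"].max()))

# Print a Markdown freshness table, one row per canton.
print("| Canton | Last update |")
print("|--------|-------------|")
for abbr, last in rows:
    print(f"| {abbr} | {last} |")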

Please separate the files into different folders

Could you please collect the files in separate folders? A suggestion:

  • fallzahlen_kanton_total_csv -> for all updates. One file per canton.
  • fallzahlen_kanton_alter_geschlecht_csv -> for more detailed info. One file per canton.

The files in these folders should be validated, i.e. have the same number of columns and the same column titles.

NW scraper fails

NW changed the wording on their site.

New:

Bisher sind 42 Personen im Kanton Nidwalden positiv auf das Coronavirus getestet worden ("So far, 42 people in the canton of Nidwalden have tested positive for the coronavirus")
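A sketch of the regex the scraper needs for the new wording; only the quoted sentence is from the site, the rest is an assumption.

import re

text = ("Bisher sind 42 Personen im Kanton Nidwalden positiv auf das "
        "Coronavirus getestet worden")

m = re.search(r"Bisher sind ([0-9]+) Personen im Kanton Nidwalden positiv", text)
print("Confirmed cases:", m.group(1) if m else "?")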

Terminology clarification

Dear all,

thanks a lot for keeping this repo up to date! There seems to be some confusion regarding the terminology: the infection figures refer to infection with the virus SARS-CoV-2, whereas the disease caused by the virus is called COVID-19.

Best wishes,
-Filippo

Primitive scraper for BS

echo BS; URL=$(curl --silent https://www.gd.bs.ch/ | egrep 'Tagesbulletin.*Corona' | grep href | head -1 | awk -F '"' '{print $2;}'); d=$(curl --silent "https://www.gd.bs.ch/${URL}" | grep "positive Fälle"); echo "Scraped at $(date --iso-8601=seconds)"; echo -n "Date and time: "; echo "$d" | sed -E -e 's/^.*Stand [A-Za-z]*,? (.+), insgesamt.*$/\1/'; echo -n "Confirmed cases: "; echo "$d" | sed -E -e 's/^.*insgesamt ([0-9]+) positive.*$/\1/'
BS
Scraped at 2020-03-21T16:47:22+00:00
Date and time: 21. März 2020, 10 Uhr
Confirmed cases: 299

Very sloppy. It goes to the gd.bs.ch homepage, takes the first bulletin link, follows it, and tries to parse the page. The text has been relatively consistent in form over the last 6 days, so the script should work fine.

totals file vs cantonal files

Thanks for aggregating these data. We are using the file
https://github.com/openZH/covid_19/blob/master/COVID19_Cases_Cantons_CH_total.csv
to feed data into neherlab.org/covid19/

However, it seems the file we are using is not in sync with the files in

https://github.com/openZH/covid_19/tree/master/fallzahlen_kanton_total_csv

Please advise as to which files we should be using and which are kept up to date.

Our parser is here:
https://github.com/neherlab/covid19_scenarios_data/blob/master/parsers/switzerland.py

A quick script for computing latest totals

In the directory with data:

for f in *.csv; do awk -F , '{if ($5) { print $1, $3, $5; }}' "$f" | tail -1; done | awk 'BEGIN { sum = 0; } { sum += $3; } END { print sum; }'

6262

Getting latest confirmed cases by subdivision:

for f in *.csv; do awk -F , '{if ($5) { print $1, $3, $5; }}' "$f" | tail -1; done |  sort -r -n -k 3
2020-03-20 VD 1432
2020-03-21 TI 918
2020-03-20 GE 873
2020-03-20 ZH 773
2020-03-20 BE 377
2020-03-21 BS 299
2020-03-21 BL 282
2020-03-20 VS 282
2020-03-20 GR 213
2020-03-20 AG 165
2020-03-20 NE 159
2020-03-21 LU 109
2020-03-20 SG 98
2020-03-21 TG 56
2020-03-20 ZG 48
2020-03-20 FL 40
2020-03-20 JU 29
2020-03-20 NW 28
2020-03-19 GL 17
2020-03-20 SH 14
2020-03-15 SZ 13
2020-03-21 UR 12
2020-03-18 AR 11
2020-03-09 FR 11
2020-03-14 AI 2
2020-03-13 OW 1

Feel free to include it in the repo, or readme. Public Domain. Signed off, Witold Baryluk.
