csvs-to-sqlite's Introduction

csvs-to-sqlite

Convert CSV files into a SQLite database. Browse and publish that SQLite database with Datasette.

Basic usage:

csvs-to-sqlite myfile.csv mydatabase.db

This will create a new SQLite database called mydatabase.db containing a single table, myfile, populated with the CSV content.

You can provide multiple CSV files:

csvs-to-sqlite one.csv two.csv bundle.db

The bundle.db database will contain two tables, one and two.

This means you can use wildcards:

csvs-to-sqlite ~/Downloads/*.csv my-downloads.db

If you pass a path to one or more directories, the script will recursively search those directories for CSV files and create tables for each one.

csvs-to-sqlite ~/path/to/directory all-my-csvs.db

Handling TSV (tab-separated values)

You can use the -s option to specify a different delimiter. If you want to use a tab character you'll need to apply shell escaping like so:

csvs-to-sqlite my-file.tsv my-file.db -s $'\t'

Refactoring columns into separate lookup tables

Let's say you have a CSV file that looks like this:

county,precinct,office,district,party,candidate,votes
Clark,1,President,,REP,John R. Kasich,5
Clark,2,President,,REP,John R. Kasich,0
Clark,3,President,,REP,John R. Kasich,7

(Real example taken from the Open Elections project)

You can now convert selected columns into separate lookup tables using the --extract-column option (short form: -c). For example:

csvs-to-sqlite openelections-data-*/*.csv \
    -c county:County:name \
    -c precinct:Precinct:name \
    -c office -c district -c party -c candidate \
    openelections.db

The format is as follows:

column_name:optional_table_name:optional_table_value_column_name

If you just specify the column name, e.g. -c office, the following table will be created:

CREATE TABLE "office" (
    "id" INTEGER PRIMARY KEY,
    "value" TEXT
);

If you specify all three options, e.g. -c precinct:Precinct:name, the table will look like this:

CREATE TABLE "Precinct" (
    "id" INTEGER PRIMARY KEY,
    "name" TEXT
);

The original tables will be created like this:

CREATE TABLE "ca__primary__san_francisco__precinct" (
    "county" INTEGER,
    "precinct" INTEGER,
    "office" INTEGER,
    "district" INTEGER,
    "party" INTEGER,
    "candidate" INTEGER,
    "votes" INTEGER,
    FOREIGN KEY (county) REFERENCES County(id),
    FOREIGN KEY (party) REFERENCES party(id),
    FOREIGN KEY (precinct) REFERENCES Precinct(id),
    FOREIGN KEY (office) REFERENCES office(id),
    FOREIGN KEY (candidate) REFERENCES candidate(id)
);

They will be populated with IDs that reference the new derived tables.
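A quick way to inspect the result (a sketch, assuming the openelections.db built above): join the fact table back to its lookup tables to reconstruct readable rows.

import sqlite3

conn = sqlite3.connect("openelections.db")
sql = """
    SELECT County.name, candidate.value, t.votes
    FROM [ca__primary__san_francisco__precinct] AS t
    JOIN County ON County.id = t.county
    JOIN candidate ON candidate.id = t.candidate
"""
for row in conn.execute(sql):
    print(row)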

Installation

$ pip install csvs-to-sqlite

csvs-to-sqlite now requires Python 3. If you are running Python 2 you can install the last version to support Python 2:

$ pip install csvs-to-sqlite==0.9.2

csvs-to-sqlite --help

Usage: csvs-to-sqlite [OPTIONS] PATHS... DBNAME

  PATHS: paths to individual .csv files or to directories containing .csvs

  DBNAME: name of the SQLite database file to create

Options:
  -s, --separator TEXT            Field separator in input .csv
  -q, --quoting INTEGER           Control field quoting behavior per csv.QUOTE_*
                                  constants. Use one of QUOTE_MINIMAL (0),
                                  QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
                                  QUOTE_NONE (3).

  --skip-errors                   Skip lines with too many fields instead of
                                  stopping the import

  --replace-tables                Replace tables if they already exist
  -t, --table TEXT                Table to use (instead of using CSV filename)
  -c, --extract-column TEXT       One or more columns to 'extract' into a
                                  separate lookup table. If you pass a simple
                                  column name that column will be replaced with
                                  integer foreign key references to a new table
                                  of that name. You can customize the name of
                                  the table like so:     state:States:state_name
                                  
                                  This will pull unique values from the 'state'
                                  column and use them to populate a new 'States'
                                  table, with an id column primary key and a
                                  state_name column containing the strings from
                                  the original column.

  -d, --date TEXT                 One or more columns to parse into ISO
                                  formatted dates

  -dt, --datetime TEXT            One or more columns to parse into ISO
                                  formatted datetimes

  -df, --datetime-format TEXT     One or more custom date format strings to try
                                  when parsing dates/datetimes

  -pk, --primary-key TEXT         One or more columns to use as the primary key
  -f, --fts TEXT                  One or more columns to use to populate a full-
                                  text index

  -i, --index TEXT                Add index on this column (or a compound index
                                  with -i col1,col2)

  --shape TEXT                    Custom shape for the DB table - format is
                                  csvcol:dbcol(TYPE),...

  --filename-column TEXT          Add a column with this name and populate with
                                  CSV file name

  --fixed-column <TEXT TEXT>...   Populate column with a fixed string
  --fixed-column-int <TEXT INTEGER>...
                                  Populate column with a fixed integer
  --fixed-column-float <TEXT FLOAT>...
                                  Populate column with a fixed float
  --no-index-fks                  Skip adding index to foreign key columns
                                  created using --extract-column (default is to
                                  add them)

  --no-fulltext-fks               Skip adding full-text index on values
                                  extracted using --extract-column (default is
                                  to add them)

  --just-strings                  Import all columns as text strings by default
                                  (and, if specified, still obey --shape,
                                  --date/datetime, and --datetime-format)

  --version                       Show the version and exit.
  --help                          Show this message and exit.

csvs-to-sqlite's Issues

".\" added to table name in SQLITE database file

Hello Simon, thanks a lot for your great tool and sharing.

I am using it to convert a bunch of CSV files in a folder. I get a .db file as output; however, my table names are prefixed with ".\" characters, which prevents me from performing further SQLite queries. Could you please advise me how to avoid this issue?
Below my cmd input and output:
C:~Documents\Profesional\2018-02-15>csvs-to-sqlite C:~Documents\Profesional\2018-02-15 output.db
extract_columns=()
Loaded 4 dataframes
c:\users\joel0\appdata\local\programs\python\python36-32\lib\site-packages\pandas\core\generic.py:1362: UserWarning: The spaces in these column names will not be changed. In pandas versions < 0.14, spaces were converted to underscores.
chunksize=chunksize, dtype=dtype)
Created output.db from 4 CSV files

thanks in advance

best_fts_version() cannot detect FTS5

Bug in this code:

def best_fts_version():
    "Discovers the most advanced supported SQLite FTS version"
    conn = sqlite3.connect(':memory:')
    for fts in ('FTS5', 'FTS4', 'FTS3'):
        try:
            conn.execute('CREATE VIRTUAL TABLE v USING {} (t TEXT);'.format(fts))
            return fts
        except sqlite3.OperationalError:
            continue
    return None

In [5]: sqlite3.connect(":memory:").execute("CREATE VIRTUAL TABLE v USING FTS5 (text s)")
---------------------------------------------------------------------------
OperationalError                          Traceback (most recent call last)
<ipython-input-5-e28cb15a3bcb> in <module>()
----> 1 sqlite3.connect(":memory:").execute("CREATE VIRTUAL TABLE v USING FTS5 (text s)")
OperationalError: unrecognized column option: s
In [6]: sqlite3.connect(":memory:").execute("CREATE VIRTUAL TABLE v USING FTS5 (s)")
Out[6]: <sqlite3.Cursor at 0x102d761f0>

So FTS5 currently always fails to be detected.
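One possible fix (a sketch, not necessarily the patch that shipped): drop the column type from the probe statement, since a bare column name is accepted by all three FTS versions.

import sqlite3

def best_fts_version():
    "Discovers the most advanced supported SQLite FTS version"
    conn = sqlite3.connect(':memory:')
    for fts in ('FTS5', 'FTS4', 'FTS3'):
        try:
            # FTS5 rejects 't TEXT' as an unrecognized column option,
            # so probe with a bare column name instead.
            conn.execute('CREATE VIRTUAL TABLE v USING {} (t);'.format(fts))
            return fts
        except sqlite3.OperationalError:
            continue
    return None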

Provide a means for importing CSVs contained inside a zip file

If a large number of regular CSV files are provided within a zip or tar file, it would be useful to be able to use csvs-to-sqlite to:

  • list any CSV files inside the zip file;
  • preview the first few lines of a CSV file contained within a zip file
  • import CSV files from a zip file directly into SQLite
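In the meantime, a workaround sketch (with hypothetical archive names): extract the archive to a temporary directory and point csvs-to-sqlite at it, since directories are already searched recursively for CSV files.

import subprocess
import tempfile
import zipfile

with zipfile.ZipFile("archive.zip") as zf, tempfile.TemporaryDirectory() as tmp:
    csv_names = [n for n in zf.namelist() if n.endswith(".csv")]
    print(csv_names)                       # list the CSVs inside the zip
    zf.extractall(tmp, members=csv_names)  # extract only the CSVs
    subprocess.run(["csvs-to-sqlite", tmp, "archive.db"], check=True)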

When input CSV has a column named 'table_name', an error is raised: "sqlite3.InterfaceError: Error binding parameter 0"

(This is for csvs-to-sqlite=1.0, using Python 3.7)

How to reproduce the error

Given a CSV that has table_name as one of its column headers, e.g. this OSHA data dictionary.csv

mkdir -p /tmp/csvtest && cd /tmp/csvtest
curl -LO https://raw.githubusercontent.com/storydrivendatasets/osha-enforcement-catalog/master/data/collected/osha/stash/osha_data_dictionary/osha_data_dictionary.csv

csvs-to-sqlite osha_data_dictionary.csv testdb.sqlite

csvs-to-sqlite throws an error:

  File "../python3.7/site-packages/csvs_to_sqlite/cli.py", line 198, in cli
    if replace_tables and table_exists(conn, df.table_name):
  File "../python3.7/site-packages/csvs_to_sqlite/utils.py", line 260, in table_exists
    [table],
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.

How the error is resolved

If the input CSV's table_name header is changed to anything else, e.g. tbl_name, then csvs-to-sqlite works as expected.

Suggested fix

Haven't looked at the code yet, but my initial thought is that it'd be nice if csvs-to-sqlite silently handled this somehow, maybe changing table_name (and any other reserved words) to a string with a specified prefix/suffix, e.g. table_name__fixed__. Then again, maybe silently handling this kind of thing can lead to messy use cases later?
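The likely root cause (a hedged sketch): pandas attribute assignment collides with the existing column, so the code later reads a Series where it expects a string.

import pandas as pd

df = pd.DataFrame({"table_name": ["users", "orders"]})
# cli.py stores the target table name as an attribute; because a
# "table_name" column already exists, the assignment overwrites the
# column instead of creating an attribute...
df.table_name = "osha_data_dictionary"
# ...so later reads return a Series, which sqlite3 cannot bind as a
# query parameter.
print(type(df.table_name))  # <class 'pandas.core.series.Series'>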

-d flag creates column as TEXT

I'm using the -d flag on a column that contains ISO-formatted dates (e.g. '2020-07-02') in my CSV file, but when this is converted to a SQLite database the column is created as TEXT. I'm using a tool (Metabase) that reads the schema metadata, so I'd like this column created with a DATE datatype if possible. Is there a way to force this?

Add option to pass na_filter into pd.read_csv() - Dealing with CSV containing NA as string

I have a CSV detailing info about airports. The country code column is encoded using 2 character ISO-3166 codes. https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes

It turns out that the code for Namibia is "NA" and this results in the values being stripped as it converts from CSV to SQLite DB.

I was able to solve this by adding na_filter=False to the pd.read_csv() call in utils.py

return pd.read_csv(

Would you consider adding an option to allow this flag to be passed into the pd.read_csv call?

Thanks
Darren
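A minimal demonstration of the default behavior described above:

import io
import pandas as pd

csv = io.StringIO("code,name\nNA,Namibia\nFR,France\n")
print(pd.read_csv(csv)["code"].tolist())                   # [nan, 'FR'] - "NA" becomes missing
csv.seek(0)
print(pd.read_csv(csv, na_filter=False)["code"].tolist())  # ['NA', 'FR']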

Lookup tables should be maintained directly in SQLite

When evaluating -c we currently use a temporary table maintained in Python space:

def id_for_value(self, value):
    if pd.isnull(value):
        return None
    try:
        return self.value_to_id[value]
    except KeyError:
        id = self.next_id
        self.id_to_value[id] = value
        self.value_to_id[value] = id
        self.next_id += 1
        return id

For handling larger CSV files (#16) this would work much better if it was a SQLite table that was queried and updated as we process data. This would also help make lookup tables re-usable across multiple CSVs across several runs of the command.
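A sketch of what a SQLite-backed lookup might look like (illustrative only, not the shipped implementation):

import sqlite3

def id_for_value(conn, table, value):
    # Lazily create the lookup table, then insert-or-fetch the value.
    if value is None:
        return None
    conn.execute(
        'CREATE TABLE IF NOT EXISTS "{}" ("id" INTEGER PRIMARY KEY, "value" TEXT UNIQUE)'.format(table)
    )
    conn.execute('INSERT OR IGNORE INTO "{}" ("value") VALUES (?)'.format(table), [value])
    return conn.execute(
        'SELECT "id" FROM "{}" WHERE "value" = ?'.format(table), [value]
    ).fetchone()[0]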

Multiple tables within the same CSV

Device ID,Last Time Seen,ESSID,Longitude,Latitude
data,data,data,data,data

Node ID,Last Time Seen,SSID,Longitude,Latitude
data,data,data,data,data

Hi, I have a few tables produced by a program that I want to convert over to SQLite. Could there be a feature in which a CSV like the example shown above results in a SQLite database with two separate tables? Within the CSV the tables are separated only by a single blank row.

Thanks
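One way such a feature might split the file (a sketch, not an implemented option, with a hypothetical devices.csv): group rows on blank lines and load each block as its own table.

import csv
import itertools

def blocks(path):
    # Yield one list of rows per blank-line-separated block.
    with open(path, newline="") as f:
        for is_blank, rows in itertools.groupby(csv.reader(f), key=lambda row: not any(row)):
            if not is_blank:
                yield list(rows)

for i, rows in enumerate(blocks("devices.csv")):
    header, body = rows[0], rows[1:]
    print("table", i, header, len(body), "rows")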

`--split` for splitting extracted columns on their values

https://data.sfgov.org/Economy-and-Community/Mobile-Food-Facility-Permit/rqzj-sfat

Say for example there's a FoodItems column in the CSV that has data in it like this:

Cold Truck: Cheeseburgers: Burgers: Chicken Bake: Chili Dogs: Hot Dogs: Corn Dogs: Cup of Noodles: Egg Muffins: Tamales: Hot Sandwiches Quesadillas: Gatorade: Juice: Soda: Mikl: Coffee: Hot Cocoa: Hot Tea: Flan: Fruits: Fruit Salad: Yogurt: Candy: Chips: Donuts: Cookies: Granola: Muffins & Various Drinks & Pre-Packaged Snacks.

Running the following could create a lookup table with those individual broken out items in it:

csvs-to-sqlite -c FoodItems --split FoodItems ": "

So the --split option takes two arguments - the name of the column, and the separator to split on.
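Each broken-out item would become one row in the lookup table; the splitting step itself is simple (a sketch):

text = "Gatorade: Juice: Soda: Coffee: Hot Cocoa"
items = [item.strip() for item in text.split(": ") if item.strip()]
print(items)  # ['Gatorade', 'Juice', 'Soda', 'Coffee', 'Hot Cocoa']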

leading zeroes / --shape args

Hi @simonw, I'm having a hard time applying the --shape option. I can use SQLite's native .import function, but that brings in everything as text; I'd love to specify that the ZCTA columns keep their leading zeroes while other integer columns don't. Edited to say this is easy to work around by just making the SQLite file by "normal" means.

That said, this and #42 both have leading zero issues. I was running version 0.9.2.

Running csvs-to-sqlite --shape "ZCTA5:ZCTA5(TEXT)" zctas.csv zctas.db on a CSV like the following results in the output having the leading zeroes removed, although using the --shape argument does result in the type being string (without specifying a type it is imported as an integer, which also removes the leading zeroes).

ZCTA5
00601
00601
00602
00603
00606
00606
00610
00612
00616
00617
00622
00623
00624
00624
00627
00631
00631
00637
00637
00638
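A workaround sketch at the pandas layer (not a csvs-to-sqlite option): a dtype override applies at parse time, whereas the --shape cast appears to run after pandas has already inferred an integer and dropped the zeroes.

import pandas as pd

# Force the column to be read as a string before type inference sees it.
df = pd.read_csv("zctas.csv", dtype={"ZCTA5": str})
print(df["ZCTA5"].head(3).tolist())  # ['00601', '00601', '00602']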

multiple CSV files and foreign keys

Hello,
In the case where you want to import, for instance, people.csv and country.csv, where people.csv contains:

id, name, country_id
1, John, 1
2, Paul, 1
3, René, 2

and country.csv contains:

id, name
1, United Kingdom
2, France

Would it be possible to declare country_id as a foreign key referencing id in country.csv?
Thanks,
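One possible route, using the separate sqlite-utils package mentioned later in these issues (a sketch, assuming both CSVs were imported into people.db): add the foreign key after the import has finished.

import sqlite_utils

db = sqlite_utils.Database("people.db")
# Registers people.country_id as a foreign key to country.id.
db["people"].add_foreign_key("country_id", "country", "id")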

-d / -dt options for parsing columns as dates

SQLite prefers yyyy-mm-dd style dates - it can then sort them easily plus it has date functions which know how to work with them.

csvs-to-sqlite should have arguments that let the user specify columns that are known to be dates (or datetimes). It can then parse those dates and output them in the preferred format.

https://github.com/scrapinghub/dateparser looks like a good library for this: it can handle date parsing across many different languages, making it a potentially better fit than http://dateutil.readthedocs.io/en/stable/parser.html
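For instance, dateparser handles many formats and languages out of the box (a quick sketch):

import dateparser

print(dateparser.parse("13 janvier 2018").date().isoformat())  # 2018-01-13
print(dateparser.parse("July 2, 2020").date().isoformat())     # 2020-07-02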

Allow appending files to an existing SQLite database

Tool currently quits with an error if you try this.

Should it be possible to update existing tables with new data? Not sure about this. We certainly can’t handle schema changes. We would need to be told a column to treat as a “primary key”.

Given how cheap it is to recreate the database from scratch I’m inclined to say it’s not worth bothering with table updates.

csvs-to-sqlite 2.0: dropping Pandas in favour of sqlite-utils

My sqlite-utils library has evolved to the point where I think it would make a good foundation for the next version of csvs-to-sqlite.

The main feature I'm excited about here is being able to handle giant CSV files - right now they have to be loaded into memory by Pandas, but sqlite-utils has similar functionality which handles them as streams, reducing the amount of memory needed to consume a huge file.
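The streaming style sqlite-utils enables looks something like this (a sketch with hypothetical file and table names): rows are consumed lazily rather than loaded into memory all at once.

import csv
import sqlite_utils

db = sqlite_utils.Database("big.db")
with open("big.csv", newline="") as f:
    db["big"].insert_all(csv.DictReader(f))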

I intend to keep as much of the CLI API the same for the new version, but this is a big change so it's likely some cases will break. As such, I intend to keep the 1.x branch around (and maintained with bug fixes) for users who find that 2.0 doesn't work for them.

I'll pin this issue for a few weeks so people can comment on this plan before I start executing.

--shape option for specifying the "shape" of the resulting table

This option will allow you to tell the command exactly what columns should be created in the new table and what their types should be.

For example:

csvs-to-sqlite votes.csv votes.db --shape "county:Cty,votes:Vts(REAL)"

This will produce a table with just two columns: Cty and Vts. Those columns will correspond to the county and votes columns in the original CSV.

The Cty column will use the default type detected by pandas - but the Vts column will be forced to be a REAL column instead.
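A sketch of how such a shape string might be parsed (illustrative only, not the actual implementation):

import re

def parse_shape(shape):
    # "csvcol:dbcol(TYPE),..." -> [(csvcol, dbcol, type_or_None), ...]
    specs = []
    for bit in shape.split(","):
        csvcol, _, rest = bit.partition(":")
        match = re.match(r"(.+?)(?:\((.+)\))?$", rest)
        specs.append((csvcol, match.group(1), match.group(2)))
    return specs

print(parse_shape("county:Cty,votes:Vts(REAL)"))
# [('county', 'Cty', None), ('votes', 'Vts', 'REAL')]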

delete _fts tables with --replace-tables

Hello,
I suggest deleting the _fts tables when the -f and --replace-tables options are used together. Otherwise, an error occurs warning that the _fts tables already exist when csvs-to-sqlite is run against existing tables with FTS support.
Thank you,

No create index option - please create new release

Hi Simon,

Love this wonderful tool! Thanks a million for making it and promoting SQLite!

The README shows an option for creating an index:
https://github.com/simonw/csvs-to-sqlite/blob/master/README.md

-i, --index TEXT Add index on this column (or a compound index with -i col1,col2)

I don't seem to have that version in my csvs-to-sqlite:

% csvs-to-sqlite --help
Usage: csvs-to-sqlite [OPTIONS] PATHS... DBNAME

  PATHS: paths to individual .csv files or to directories containing .csvs

  DBNAME: name of the SQLite database file to create

Options:
  -s, --separator TEXT       Field separator in input .csv
  --replace-tables           Replace tables if they already exist
  -c, --extract-column TEXT  One or more columns to 'extract' into a separate
                             lookup table. If you pass a simple column name
                             that column will be replaced with integer foreign
                             key references to a new table of that name. You
                             can customize the name of the table like so:

                                 --extract-column state:States:state_name

                             This will pull unique values from the 'state'
                             column and use them to populate a new 'States'
                             table, with an id column primary key and a
                             state_name column containing the strings from the
                             original column.
  -f, --fts TEXT             One or more columns to use to populate a full-
                             text index
  --version                  Show the version and exit.
  --help                     Show this message and exit.
% csvs-to-sqlite --version
csvs-to-sqlite, version 0.7

Is this an option in an unreleased build of csvs-to-sqlite? If so, when do you think it would be ready?

Thanks!

Should accept URLs to CSV files as well as paths

That way we can use this tool to suck one or more CSVs directly from the internet and turn them into a SQLite database.

import pandas as pd
pd.read_csv('https://raw.githubusercontent.com/openelections/openelections-data-ms/master/2013/20130205__ms__special__general__hinds__state_senate__28__precinct.csv')

This works already - so pd.read_csv is capable of this. We just need to teach the command-line option to accept URLs in addition to paths.

Figure out a mechanism for interpreting dates as always in the 1900s

I created https://csvs-to-sqlite-date-demo.now.sh/antiquities-317d506/actions.under.antiquities.act like this:

$ csvs-to-sqlite actions.under.antiquities.act.csv antiquities.db -d date

Using the CSV from here: https://github.com/fivethirtyeight/data/blob/master/antiquities-act/actions_under_antiquities_act.csv

Just one problem:

(Screenshot omitted: two-digit years were parsed into the wrong century.)

It would be nice if there was a way to tell csvs-to-sqlite "if a year is two digits, treat it as being in the 1900s".
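A sketch of the kind of heuristic being asked for (assuming, say, m/d/yy input; a real option would need to hook into the date-parsing step):

from datetime import datetime

def parse_19xx(value):
    # strptime puts two-digit years 69-99 in the 1900s and 00-68 in
    # the 2000s; clamp everything to the 1900s instead.
    dt = datetime.strptime(value, "%m/%d/%y")
    if dt.year > 1999:
        dt = dt.replace(year=dt.year - 100)
    return dt.date().isoformat()

print(parse_19xx("4/24/49"))  # 1949-04-24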

Process "Killed" Loading Large CSV in Binderhub

Trying to load a 1 GB / 2.5-million-row CSV file into a table on a resource-limited MyBinder VM, I kept getting a "Killed" message on the process.

What did work fine was a chunked load using pandas, although without predefining the table it does mean the table structure could end up being anything:

import pandas as pd  # conn, datafile and tablename defined elsewhere

for chunk in pd.read_csv(datafile, chunksize=2000):
    chunk.to_sql(name=tablename, con=conn, if_exists="append", index=False)

Extract columns to existing table with different primary key column than 'id'?

I'm trying to extract columns using --extract-column "src_col:dest_table:dest_value_col", but in my dest_table the ID column is not 'id' but 'tablename_id'. What's the best way to get around this issue? I noticed this in the CLI code, but wasn't sure about the best way to change it to allow a different '(id)':

FOREIGN KEY ("{}") REFERENCES [{}](id)'.format(column, table)

CSVs with lines ending \r\n result in missing first column and shift in data columns

If the CSV file uses \r\n as its row ending, csvs-to-sqlite misses the first column (of data) and shifts the columns "to the left" (and the last column is therefore empty).

The \r\n ending is the default behaviour of Python's csv writer, see https://docs.python.org/3/library/csv.html#csv.Dialect.lineterminator

E.g.:

CSV:

id,value,number
1,3,2
2,1,7
3,4,1

results in this SQLite table:

id,value,number
3,2,
1,7,
4,1,
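To reproduce such an input file (a sketch): the csv module writes \r\n by default unless the dialect overrides it.

import csv

with open("test.csv", "w", newline="") as f:
    writer = csv.writer(f)  # default lineterminator is '\r\n'
    writer.writerow(["id", "value", "number"])
    writer.writerows([[1, 3, 2], [2, 1, 7], [3, 4, 1]])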

Process files sequentially to decrease memory footprint

Unless the data in the input files needs to be processed together (in some of the foreign-key column cases, maybe?), it would be better to write each SQLite table right after creating its dataframe instead of gathering all the dataframes first. Right now, when importing many files that together contain a lot of data, an OOM occurs. From what I can see, freeing Pandas dataframes is difficult/impossible:
https://stackoverflow.com/questions/39100971/how-do-i-release-memory-used-by-a-pandas-dataframe

so this may not help either (it did not in a simple test I did).

Right now the only way to import many files is to shell-script around this tool, importing a single file per invocation.
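That workaround, sketched in Python (with hypothetical paths): one invocation per file keeps peak memory to a single CSV.

import glob
import subprocess

for path in sorted(glob.glob("data/*.csv")):
    subprocess.run(["csvs-to-sqlite", path, "all-my-csvs.db"], check=True)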

csvs-to-sqlite has a hard requirement on package versions.

Currently the required versions are hardcoded:

csvs-to-sqlite/setup.py

Lines 22 to 28 in dccbf65

install_requires=[
    'click==6.7',
    'dateparser==0.7.0',
    'pandas==0.20.3',
    'py-lru-cache==0.1.4',
    'six',
],

The latest Pandas version is 0.21.0; this makes csvs-to-sqlite unusable with it by default, and unusable alongside any package that requires the latest pandas.

Is there a reason why a specific version is enforced and not "0.20.3 or later"?
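What the issue asks for would look something like this (a sketch of loosened pins, not the actual change that shipped):

install_requires=[
    'click>=6.7',
    'dateparser>=0.7.0',
    'pandas>=0.20.3',
    'py-lru-cache>=0.1.4',
    'six',
],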

-f and -c don't work with single table to multiple columns

Example CSV:

film,actor_1,actor_2
The Rock,Sean Connery,Nicolas Cage
National Treasure,Nicolas Cage,Diane Kruger
Troy,Diane Kruger,Orlando Bloom

This command:

csvs-to-sqlite films.csv films.db \
    -c film -c actor_1:actors:name -c actor_2:actors:name \
    -f film -f actor_1 -f actor_2

Returns this error:

 ambiguous column name: actors.name
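The ambiguity most likely comes from joining the shared actors lookup table once per extracted column without aliases when populating the full-text index; per-join aliases would disambiguate (a hedged sketch of such a query):

sql = """
SELECT film.value AS film, a1.name AS actor_1, a2.name AS actor_2
FROM films
JOIN film ON film.id = films.film
JOIN actors AS a1 ON a1.id = films.actor_1
JOIN actors AS a2 ON a2.id = films.actor_2
"""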

Interpret all columns as TEXT data type

Is it possible to interpret all columns as the TEXT datatype (through a flag, maybe?). I think the column values are sampled and then the column datatype is guessed. If there is an incompatible value in some row, that row seems to be skipped. So instead, is it possible to load everything as the TEXT data type? I just need the data in some format (for data comparison purposes), but all CSV data must go into the table without skipping rows.

Thanks
Thyag
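The --just-strings option shown in the help output above ("Import all columns as text strings by default") appears to address exactly this:

csvs-to-sqlite mydata.csv mydata.db --just-strings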

Help with CSV parsing errors

Pandas read_csv throws an exception when encountering a line that seems to have too many fields, but it can be made to skip these bad lines and then report them on stdout if passed error_bad_lines=False. While Pandas does not make it easy to deal with these lines (pandas-dev/pandas#5686), it would be nice if csvs-to-sqlite could offer something. Maybe parsing the read_csv output and then traversing the file to save the bad lines separately, so the user can fix and reprocess them?

Need workaround for 'NULL' data

The data set I'm working with contains someone whose last name is 'Null'. This ends up being an empty value in my SQLite database. It seems like there should be a documented way to work around this.

Extracting multiple columns into the same table

I have a CSV file that defines an unnormalised table and I'd like to be able to extract groups of columns into separate tables.

I tried something of the form:

csvs-to-sqlite -c colA:newtable:colA -c colB:newtable:colB

but that only seems to pull the first column out.

I also wondered if formulations of the sort:

csvs-to-sqlite -c colA:newtable:colA colB:newtable:colB
csvs-to-sqlite -c colA:newtable:colA,colB:newtable:colB

might work (reading -c in the sense of "pull the following columns into the same table") but that just threw an error.

Is there a way to pull several columns out into the same new table?

Columns are typed "REAL" if they are integers with some NaN/blanks

This is bad. If a column has all integers and some blanks it should result in an INTEGER.

Example: this CSV https://github.com/openelections/openelections-data-ca/blob/master/2016/20161108__ca__general__yolo__precinct.csv produces this SQL table:

CREATE TABLE "2016/20161108__ca__general__yolo__precinct" (
"county" TEXT,
  "precinct" INTEGER,
  "office" INTEGER,
  "district" REAL,
  "party" REAL,
  "candidate" INTEGER,
  "votes" INTEGER
,
FOREIGN KEY (county) REFERENCES county(id),
    FOREIGN KEY (party) REFERENCES party(id),
    FOREIGN KEY (precinct) REFERENCES precinct(id),
    FOREIGN KEY (office) REFERENCES office(id),
    FOREIGN KEY (candidate) REFERENCES candidate(id))
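The underlying cause is that pandas represents an integer column containing missing values as float64, which maps to REAL. Recent pandas versions offer a nullable integer dtype that avoids this (a sketch):

import io
import pandas as pd

csv = io.StringIO("county,district\na,1\nb,\nc,3\n")
print(pd.read_csv(csv)["district"].dtype)  # float64 -> stored as REAL
csv.seek(0)
print(pd.read_csv(csv, dtype={"district": "Int64"})["district"].dtype)  # Int64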

Drop support for Python 2.x

Various dependencies (Click and Pandas) no longer support 2.x in their latest versions, which is blocking the upgrade.

Python 2.x users will still be able to use csvs-to-sqlite, they'll just have to install an older version.

I can use this as an excuse to ship 1.0.
