geoparsepy's Introduction

geoparsepy project

geoparsepy is a Python geoparsing library that extracts and disambiguates locations from text. It uses a local OpenStreetMap database, which allows very high and unlimited geoparsing throughput, unlike approaches that rely on a third-party geocoding service (e.g. the Google Geocoding API). This repository also holds Python examples for using the PyPI library.

geoparsepy PyPI

Geoparsing is based on named entity matching against OpenStreetMap (OSM) locations. All locations whose names match tokens in a target text sentence are selected, so each matched token yields a set of OSM locations that share a common name or name variant. Geoparsing includes the following features:

  • token expansion using location name variants (i.e. OSM multi-lingual names, short names and acronyms)
  • token expansion using location type variants (e.g. street, st.)
  • token filtering of single-token location names against WordNet (non-nouns), language-specific stoplists and people's first names (nltk.corpus.names.words()) to reduce false positive matches (see the sketch after this list)
  • prefix checking when matching, in case a first name prefixes the location token(s), to avoid matching people's full names as locations (e.g. Victoria Derbyshire != Derbyshire)
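
The filtering step above can be illustrated with a small sketch using the same NLTK corpora. This is explanatory code written for this README, not geoparsepy's internal implementation, and the keep_single_token_candidate helper is hypothetical:

from nltk.corpus import names, stopwords, wordnet

# precompute the lookup sets used for filtering (illustration only)
FIRST_NAMES = set( n.lower() for n in names.words() )
STOPLIST = set( stopwords.words('english') )

def keep_single_token_candidate( token ) :
	token = token.lower()
	if token in STOPLIST :
		# very common word, almost certainly not a location reference
		return False
	if token in FIRST_NAMES :
		# peoples first name (e.g. 'victoria'), likely part of a person name
		return False
	if wordnet.synsets( token ) and not wordnet.synsets( token, pos = wordnet.NOUN ) :
		# known dictionary word that is never a noun, so reject it
		return False
	return True

print( keep_single_token_candidate( 'derbyshire' ) )
print( keep_single_token_candidate( 'bill' ) )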

Location disambiguation is the process of choosing which of a set of possible OSM locations, all sharing the same name, is the best match. It is based on an evidential approach, with the evidence features detailed below in order of importance:

  • token subsumption, rejecting smaller phrases in favour of larger ones (e.g. 'New York' will prefer [New York, USA] to [York, UK]) (see the sketch after this list)
  • nearby parent region, preferring locations with a parent region also appearing within a semantic distance (e.g. 'New York in USA' will prefer [New York, USA] to [New York, BO, Sierra Leone])
  • nearby locations, preferring locations that are close to or overlap other matched locations within a semantic distance (e.g. 'London St and Commercial Road' will select from road name choices with the same name based on spatial proximity)
  • nearby geotag, preferring locations that are close to or overlap a provided geotag
  • general before specific, rejecting locations with a higher admin level (or no admin level at all) compared to locations with a lower admin level (e.g. 'New York' will prefer [New York, USA] to [New York, BO, Sierra Leone])
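
To make the token subsumption rule concrete, the sketch below (illustrative only, not geoparsepy's implementation) drops any candidate phrase whose token span is strictly contained within another candidate's span:

def prune_subsumed_matches( list_matches ) :
	# list_matches = list of (token_start, token_end, phrase) tuples
	list_kept = []
	for (s1, e1, phrase1) in list_matches :
		subsumed = False
		for (s2, e2, phrase2) in list_matches :
			if (s2, e2) != (s1, e1) and s2 <= s1 and e1 <= e2 :
				# this phrase sits inside a larger candidate phrase, so reject it
				subsumed = True
				break
		if not subsumed :
			list_kept.append( (s1, e1, phrase1) )
	return list_kept

# 'york' @ 2:2 is inside 'new york' @ 1:2, so only the larger phrase survives
print( prune_subsumed_matches( [ (1, 2, 'new york'), (2, 2, 'york') ] ) )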

Currently the following languages are supported:

  • English, French, German, Italian, Portuguese, Russian, Ukrainian
  • All other languages will work but there will be no language specific token expansion available

geoparsepy works with Python 3.7 and has been tested on Windows 10 and Ubuntu 18.04 LTS.

This geoparsing algorithm has a large memory footprint (e.g. 12 GB of RAM for global cities), with RAM usage proportional to the number of cached locations, in order to maximize matching speed. It can be naively parallelized, with multiple geoparse processes each loaded with a different set of locations and the geoparse results aggregated in a final process where location disambiguation is applied. This approach has been validated across an Apache Storm cluster.
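
The sketch below illustrates that naive parallelization pattern using Python multiprocessing instead of Apache Storm. The toy match() worker stands in for a geoparse process loaded with one subset of locations; the snippet only illustrates the data flow and is not geoparsepy code:

from multiprocessing import Pool

def match( args ) :
	# toy stand-in for one geoparse worker: it only knows its own subset of location names
	location_subset, text = args
	tokens = text.lower().split()
	return [ name for name in location_subset if name in tokens ]

def parallel_geoparse( text, location_subsets ) :
	with Pool( processes = len(location_subsets) ) as pool :
		partial_results = pool.map( match, [ (subset, text) for subset in location_subsets ] )
	# final aggregation step; in geoparsepy this is where location disambiguation would be applied
	return sorted( { name for result in partial_results for name in result } )

if __name__ == '__main__' :
	list_subsets = [ { 'london', 'york' }, { 'paris', 'berlin' } ]
	print( parallel_geoparse( 'meeting in london and paris next week', list_subsets ) )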

The software is copyright 2020 University of Southampton, UK. It was created over a multi-year period under the EU FP7 projects TRIDEC (258723) and REVEAL (610928), the Innovate UK project LPLP (104875) and the ESRC project FloraGuard (ES/R003254/1). This software can only be used for research, education or evaluation purposes. A free commercial license is available on request to {sem03}@soton.ac.uk. The University of Southampton is open to discussions regarding collaboration in future research projects relating to this work.

Feature suggestions and/or bug reports can be sent to {sem03}@soton.ac.uk. We do not however offer any software support beyond the examples and API documentation already provided.

Scientific publications

Middleton, S.E., Middleton, L., Modafferi, S. Real-time Crisis Mapping of Natural Disasters using Social Media, IEEE Intelligent Systems, vol. 29, no. 2, pp. 9-17, Mar.-Apr. 2014

Middleton, S.E., Krivcovs, V. Geoparsing and Geosemantics for Social Media: Spatio-Temporal Grounding of Content Propagating Rumours to support Trust and Veracity Analysis during Breaking News, ACM Transactions on Information Systems (TOIS), 34, 3, Article 16 (April 2016), 26 pages. DOI: 10.1145/2842604

Middleton, S.E., Kordopatis-Zilos, G., Papadopoulos, S., Kompatsiaris, Y. Location Extraction from Social Media: Geoparsing, Location Disambiguation, and Geotagging, ACM Transactions on Information Systems (TOIS), 36, 4, Article 40 (June 2018), 27 pages. DOI: https://doi.org/10.1145/3202662. Presented at SIGIR 2019

A benchmark geoparse dataset is also available for free from the University of Southampton on request via email to {sem03}@soton.ac.uk.

geoparsepy documentation resources

geoparsepy API

geoparsepy example code on github

Python libs needed (earlier versions may be suitable but are untested)

Python libs: psycopg2 >= 2.8, nltk >= 3.4, numpy >= 1.18, shapely >= 1.6, setuptools >= 46, soton-corenlppy>=1.0

Database: PostgreSQL >= 11.3, PostGIS >= 2.5

For Linux deployments the following packages are needed:

sudo apt-get install libgeos-dev libgeos-3.4.2 libpq-dev

You will need to download the NLTK corpora before running geoparsepy:

python
import nltk
nltk.download()
==> in the downloader, install 'all' or at least the stopwords, names and wordnet corpora
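
If you prefer a non-interactive setup (e.g. on a headless server), the individual corpora can also be downloaded directly; this is standard NLTK usage rather than anything geoparsepy-specific:

import nltk

# fetch only the corpora geoparsepy needs, without opening the interactive downloader
for strCorpus in [ 'stopwords', 'names', 'wordnet' ] :
	nltk.download( strCorpus )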

Installation

python3 -m pip install geoparsepy

Databases needed for geoparsing

Download the pre-processed UTF-8 encoded SQL table dumps from an OSM image dated December 2019. The SQL dump is a 1.2 GB tar/zip file created using pg_dump and compressed using the 7-Zip tool.

download zip file from Google drive https://drive.google.com/file/d/1xyCjQox6gCoN8e0upHHyeMLV-uLirthS/view?usp=sharing
unzip geoparsepy_preprocessed_tables.tar.zip
tar -xvf geoparsepy_preprocessed_tables.tar

Connect to PostgreSQL and create the database with the required PostGIS and hstore extensions

psql -U postgres
CREATE DATABASE openstreetmap;
-- connect to the new database so the extensions are created in it
\c openstreetmap
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;
CREATE EXTENSION IF NOT EXISTS hstore;

Import the precomputed database tables for global cities and places

# Linux
psql -U postgres -d openstreetmap -f global_cities.sql
psql -U postgres -d openstreetmap -f uk_places.sql
psql -U postgres -d openstreetmap -f north_america_places.sql
psql -U postgres -d openstreetmap -f europe_places.sql

# Windows 10 using powershell
& 'C:\Program Files\PostgreSQL\11\bin\psql.exe' -U postgres -d openstreetmap -f global_cities.sql
& 'C:\Program Files\PostgreSQL\11\bin\psql.exe' -U postgres -d openstreetmap -f uk_places.sql
& 'C:\Program Files\PostgreSQL\11\bin\psql.exe' -U postgres -d openstreetmap -f north_america_places.sql
& 'C:\Program Files\PostgreSQL\11\bin\psql.exe' -U postgres -d openstreetmap -f europe_places.sql

# Linux: if the role 'sem' (used in the SQL dump) is not a database user and a different name is needed (e.g. sem03)
find * -name \*.sql -exec sed -i "s/TO sem;/TO sem03;/g" {} \;
psql -d openstreetmap -f uk_places.sql
psql -d openstreetmap -f global_cities.sql
psql -d openstreetmap -f north_america_places.sql
psql -d openstreetmap -f europe_places.sql

Example code geoparse (start here)

Geoparse some text using the default focus areas in the Postgres database. A fully documented example .py file can be found at geoparsepy.example_geoparse.py. Note: loading 300,000+ global locations into memory at startup is slow (about 10 minutes), but the subsequent geoparsing of text is very fast (real-time speeds).

import os, sys, logging, traceback, codecs, datetime, copy, time, ast, math, re, random, shutil, json
import soton_corenlppy, geoparsepy

LOG_FORMAT = ('%(message)s')
logger = logging.getLogger( __name__ )
logging.basicConfig( level=logging.INFO, format=LOG_FORMAT )
logger.info('logging started')

# build the geoparse configuration (language codes for token expansion, plus the whitespace, sentence separator and punctuation characters used by the tokenizer)
dictGeospatialConfig = geoparsepy.geo_parse_lib.get_geoparse_config( 
	lang_codes = ['en'],
	logger = logger,
	whitespace = u'"\u201a\u201b\u201c\u201d()',
	sent_token_seps = ['\n','\r\n', '\f', u'\u2026'],
	punctuation = """,;\/:+-#~&*=!?""",
	)

# open a connection to the local PostgreSQL/PostGIS database holding the preprocessed OSM location tables
databaseHandle = soton_corenlppy.PostgresqlHandler.PostgresqlHandler( 'postgres', 'postgres', 'localhost', 5432, 'openstreetmap', 600 )

# location ID ranges per focus area table; [-1,-1] requests the full range (all rows) for each admin, poly, line and point table
dictLocationIDs = {}
listFocusArea=[ 'global_cities', 'europe_places', 'north_america_places', 'uk_places' ]
for strFocusArea in listFocusArea :
	dictLocationIDs[strFocusArea + '_admin'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_poly'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_line'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_point'] = [-1,-1]

# cache all locations from the selected focus area tables into memory (slow at startup, see the note above)
cached_locations = geoparsepy.geo_preprocess_lib.cache_preprocessed_locations( databaseHandle, dictLocationIDs, 'public', dictGeospatialConfig )
logger.info( 'number of cached locations = ' + str(len(cached_locations)) )

databaseHandle.close()

# compile an inverted index of location name phrases for fast token matching
indexed_locations = geoparsepy.geo_parse_lib.calc_inverted_index( cached_locations, dictGeospatialConfig )
logger.info( 'number of indexed phrases = ' + str(len(indexed_locations.keys())) )

# compile a geometry index used for reverse geocoding of geotags
indexed_geoms = geoparsepy.geo_parse_lib.calc_geom_index( cached_locations )
logger.info( 'number of indexed geoms = ' + str(len(indexed_geoms.keys())) )

# create an index of OSM ID to row indexes in cached_locations
osmid_lookup = geoparsepy.geo_parse_lib.calc_osmid_lookup( cached_locations )

dictGeomResultsCache = {}

listText = [
	u'hello New York, USA its Bill from Bassett calling',
	u'live on the BBC Victoria Derbyshire is visiting Derbyshire for an exclusive UK interview',
	]

listTokenSets = []
listGeotags = []
for nIndex in range(len(listText)) :
	strUTF8Text = listText[ nIndex ]
	listToken = soton_corenlppy.common_parse_lib.unigram_tokenize_text( text = strUTF8Text, dict_common_config = dictGeospatialConfig )
	listTokenSets.append( listToken )
	listGeotags.append( None )

# geoparse the tokenized sentences against the cached locations (raw phrase matches, before disambiguation)
listMatchSet = geoparsepy.geo_parse_lib.geoparse_token_set( listTokenSets, indexed_locations, dictGeospatialConfig )

# attach a geotag (WKT point) to the first sentence and reverse geocode it against the indexed geometries
strGeom = 'POINT(-1.4052268 50.9369033)'
listGeotags[0] = strGeom

listMatchGeotag = geoparsepy.geo_parse_lib.reverse_geocode_geom( [strGeom], indexed_geoms, dictGeospatialConfig )
if len( listMatchGeotag[0] ) > 0  :
	for tupleOSMIDs in listMatchGeotag[0] :
		setIndexLoc = osmid_lookup[ tupleOSMIDs ]
		for nIndexLoc in setIndexLoc :
			strName = cached_locations[nIndexLoc][1]
			logger.info( 'Reverse geocoded geotag location [index ' + str(nIndexLoc) + ' osmid ' + repr(tupleOSMIDs) + '] = ' + strName )

for nIndex in range(len(listMatchSet)) :
	logger.info( 'Text = ' + listText[nIndex] )
	listMatch = listMatchSet[ nIndex ]
	strGeom = listGeotags[ nIndex ]
	setOSMID = set([])
	for tupleMatch in listMatch :
		nTokenStart = tupleMatch[0]
		nTokenEnd = tupleMatch[1]
		tuplePhrase = tupleMatch[3]
		for tupleOSMIDs in tupleMatch[2] :
			setIndexLoc = osmid_lookup[ tupleOSMIDs ]
			for nIndexLoc in setIndexLoc :
				logger.info( 'Location [index ' + str(nIndexLoc) + ' osmid ' + repr(tupleOSMIDs) + ' @ ' + str(nTokenStart) + ' : ' + str(nTokenEnd) + '] = ' + ' '.join(tuplePhrase) )
				break
	listLocMatches = geoparsepy.geo_parse_lib.create_matched_location_list( listMatch, cached_locations, osmid_lookup )
	# disambiguate the raw matches: filter by confidence (using the sentence geotag as context), by geometry area (general before specific) and by a region of interest (here the OSM relations for USA and UK)
	geoparsepy.geo_parse_lib.filter_matches_by_confidence( listLocMatches, dictGeospatialConfig, geom_context = strGeom, geom_cache = dictGeomResultsCache )
	geoparsepy.geo_parse_lib.filter_matches_by_geom_area( listLocMatches, dictGeospatialConfig )
	geoparsepy.geo_parse_lib.filter_matches_by_region_of_interest( listLocMatches, [-148838, -62149], dictGeospatialConfig )
	setOSMID = set([])
	for nMatchIndex in range(len(listLocMatches)) :
		nTokenStart = listLocMatches[nMatchIndex][1]
		nTokenEnd = listLocMatches[nMatchIndex][2]
		tuplePhrase = listLocMatches[nMatchIndex][3]
		strGeom = listLocMatches[nMatchIndex][4]
		tupleOSMID = listLocMatches[nMatchIndex][5]
		dictOSMTags = listLocMatches[nMatchIndex][6]
		if not tupleOSMID in setOSMID :
			setOSMID.add( tupleOSMID )
			listNameMultilingual = geoparsepy.geo_parse_lib.calc_multilingual_osm_name_set( dictOSMTags, dictGeospatialConfig )
			strNameList = ';'.join( listNameMultilingual )
			strOSMURI = geoparsepy.geo_parse_lib.calc_OSM_uri( tupleOSMID, strGeom )
			logger.info( 'Disambiguated Location [index ' + str(nMatchIndex) + ' osmid ' + repr(tupleOSMID) + ' @ ' + str(nTokenStart) + ' : ' + str(nTokenEnd) + '] = ' + strNameList + ' : ' + strOSMURI )

Example geoparse output

logging started
loading stoplist from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-geo-stoplist-en.txt
loading whitelist from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-geo-whitelist.txt
loading blacklist from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-geo-blacklist.txt
loading building types from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
loading location type corpus C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
- 3 unique titles
- 76 unique types
loading street types from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-streettype-en.txt
loading location type corpus C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-streettype-en.txt
- 15 unique titles
- 32 unique types
loading admin types from C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-admintype-en.txt
loading location type corpus C:\Program Files\Python3\lib\site-packages\geoparsepy\corpus-admintype-en.txt
- 14 unique titles
- 0 unique types
loading gazeteer from C:\Program Files\Python3\lib\site-packages\geoparsepy\gazeteer-en.txt
caching locations : {'global_cities_admin': [-1, -1], 'global_cities_poly': [-1, -1], 'global_cities_line': [-1, -1], 'global_cities_point': [-1, -1], 'europe_places_admin': [-1, -1], 'europe_places_poly': [-1, -1], 'europe_places_line': [-1, -1], 'europe_places_point': [-1, -1], 'north_america_places_admin': [-1, -1], 'north_america_places_poly': [-1, -1], 'north_america_places_line': [-1, -1], 'north_america_places_point': [-1, -1], 'uk_places_admin': [-1, -1], 'uk_places_poly': [-1, -1], 'uk_places_line': [-1, -1], 'uk_places_point': [-1, -1]}
number of cached locations = 800820
number of indexed phrases = 645697
number of indexed geoms = 657264
Reverse geocoded geotag location [index 190787 osmid (253067120,)] = Bassett
Reverse geocoded geotag location [index 779038 osmid (253067120,)] = Bassett
Text = hello New York, USA its Bill from Bassett calling
Location [index 792265 osmid (29457403,) @ 1 : 2] = new york
Location [index 737029 osmid (151937435,) @ 1 : 2] = new york
Location [index 737030 osmid (316976734,) @ 1 : 2] = new york
Location [index 140096 osmid (-175905,) @ 1 : 2] = new york
Location [index 737028 osmid (61785451,) @ 1 : 2] = new york
Location [index 792266 osmid (2218262347,) @ 1 : 2] = new york
Location [index 146732 osmid (-61320,) @ 1 : 2] = new york
Location [index 126105 osmid (-134353,) @ 2 : 2] = york
Location [index 758451 osmid (153595296,) @ 2 : 2] = york
Location [index 758454 osmid (153968758,) @ 2 : 2] = york
Location [index 114051 osmid (-1425436,) @ 2 : 2] = york
Location [index 758455 osmid (158656063,) @ 2 : 2] = york
Location [index 758452 osmid (153924230,) @ 2 : 2] = york
Location [index 758450 osmid (153473841,) @ 2 : 2] = york
Location [index 758449 osmid (151672942,) @ 2 : 2] = york
Location [index 758458 osmid (316990182,) @ 2 : 2] = york
Location [index 758448 osmid (151651405,) @ 2 : 2] = york
Location [index 800785 osmid (20913294,) @ 2 : 2] = york
Location [index 758447 osmid (151528825,) @ 2 : 2] = york
Location [index 140948 osmid (-148838,) @ 4 : 4] = usa
Location [index 190787 osmid (253067120,) @ 8 : 8] = bassett
Location [index 705552 osmid (151840681,) @ 8 : 8] = bassett
Location [index 705551 osmid (151463868,) @ 8 : 8] = bassett
Disambiguated Location [index 0 osmid (-61320,) @ 1 : 2] = New York;NY;New York State : http://www.openstreetmap.org/relation/61320
Disambiguated Location [index 3 osmid (-148838,) @ 4 : 4] = United States;US;USA;United States of America : http://www.openstreetmap.org/relation/148838
Disambiguated Location [index 5 osmid (253067120,) @ 8 : 8] =  : http://www.openstreetmap.org/node/253067120
Text = live on the BBC Victoria Derbyshire is visiting Derbyshire for an exclusive UK interview
Location [index 87080 osmid (-2316741,) @ 4 : 4] = victoria
Location [index 177879 osmid (-10307525,) @ 4 : 4] = victoria
Location [index 754399 osmid (154301948,) @ 4 : 4] = victoria
Location [index 45074 osmid (-5606595,) @ 4 : 4] = victoria
Location [index 595897 osmid (385402175,) @ 4 : 4] = victoria
Location [index 595901 osmid (462241727,) @ 4 : 4] = victoria
Location [index 754403 osmid (158651084,) @ 4 : 4] = victoria
Location [index 754358 osmid (151336948,) @ 4 : 4] = victoria
Location [index 128827 osmid (-407423,) @ 4 : 4] = victoria
Location [index 595902 osmid (463188523,) @ 4 : 4] = victoria
Location [index 595899 osmid (447925715,) @ 4 : 4] = victoria
Location [index 595898 osmid (435240340,) @ 4 : 4] = victoria
Location [index 597713 osmid (277608416,) @ 4 : 4] = victoria
Location [index 45017 osmid (-5606596,) @ 4 : 4] = victoria
Location [index 775444 osmid (30189922,) @ 4 : 4] = victoria
Location [index 87296 osmid (-2256643,) @ 4 : 4] = victoria
Location [index 754364 osmid (151395812,) @ 4 : 4] = victoria
Location [index 157847 osmid (74701108,) @ 4 : 4] = victoria
Location [index 754393 osmid (151521359,) @ 4 : 4] = victoria
Location [index 161280 osmid (75538688,) @ 4 : 4] = victoria
Location [index 595900 osmid (460070685,) @ 4 : 4] = victoria
Location [index 754369 osmid (151476805,) @ 4 : 4] = victoria
Location [index 99056 osmid (-1828436,) @ 4 : 4] = victoria
Location [index 126056 osmid (-195384,) @ 8 : 8] = derbyshire
Location [index 146796 osmid (-62149,) @ 12 : 12] = uk
Disambiguated Location [index 0 osmid (-195384,) @ 8 : 8] = Derbyshire : http://www.openstreetmap.org/relation/195384
Disambiguated Location [index 2 osmid (-62149,) @ 12 : 12] = United Kingdom;GB;GBR;UK : http://www.openstreetmap.org/relation/62149

Databases needed for preprocessing focus areas (optional)

To preprocess your own focus areas (e.g. a city with all its streets and buildings) you need a local deployment of the planet OpenStreetMap database. Once a focus area is preprocessed, a database table will be created for it. This can be used in the geoparse just like the 'global_cities' focus area in the previous example (see the snippet below). The instructions below are dated December 2020; refer to the links for more up-to-date information.
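
For example, once the 'southampton' focus area from the example below has been preprocessed, it can be cached alongside the precomputed tables by extending the focus area list from the earlier geoparse example (illustrative snippet reusing that example's variable names):

# add the newly preprocessed focus area to the focus areas cached at startup
listFocusArea = [ 'global_cities', 'europe_places', 'north_america_places', 'uk_places', 'southampton' ]
for strFocusArea in listFocusArea :
	dictLocationIDs[strFocusArea + '_admin'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_poly'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_line'] = [-1,-1]
	dictLocationIDs[strFocusArea + '_point'] = [-1,-1]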

Osm2pgsql Planet.osm

# Download OpenStreetMap map data archive
- http://wiki.openstreetmap.org/wiki/Planet.osm
  + pick a mirror and download planet-latest.osm.bz2 file
  + all maps are WGS84 coord system
  + this will give you a .bz2 compressed .osm file with the full planet OSM dataset
- see https://github.com/openstreetmap/osm2pgsql

# remove postgres (old versions - might not be needed if clean install)
sudo apt list --installed | grep post
sudo apt-get remove --purge postgresql-10
sudo apt-get remove --purge postgresql-10-postgis-2.4-scripts
sudo apt-get remove --purge postgis

# install using a version number (otherwise get problems later)
sudo apt-get install python3-apt
sudo apt-get install postgresql-10-postgis-2.4

# print versions
pg_config --version
psql --version

sudo /etc/init.d/postgresql stop
sudo /etc/init.d/postgresql status

sudo nano /etc/postgresql/10/main/pg_hba.conf
host    all             all             127.0.0.1/32            md5
host    all             all             127.0.0.1/32            trust

sudo nano /etc/postgresql/10/main/postgresql.conf
+ listen_addresses = '*'
+ shared_buffers = 512MB
+ work_mem = 512MB
+ maintenance_work_mem = 2GB
+ max_worker_processes = 16
+ max_parallel_workers_per_gather = 8
+ max_parallel_workers = 16
+ constraint_exclusion = partition

sudo /etc/init.d/postgresql start
sudo /etc/init.d/postgresql status

# check postgresql is running OK
sudo netstat -nlp | grep 5432
sudo cat /var/log/postgresql/postgresql-10-main.log

# make postgis database (empty initially)
sudo -u postgres createdb openstreetmap
sudo -u postgres psql -d openstreetmap -c 'CREATE EXTENSION postgis; CREATE EXTENSION hstore;'
sudo -u postgres psql -d openstreetmap -c "SELECT * FROM information_schema.tables WHERE table_schema = 'public'"

# install osm
sudo mkdir /var/lib/osm
cd /var/lib/osm
sudo wget http://ftp.snt.utwente.nl/pub/misc/openstreetmap/planet-latest.osm.bz2

# make flat node file (as planet OSM is too large otherwise for RAM)
sudo mkdir /var/lib/osm/flat-nodes
sudo chown -R postgres /var/lib/osm/flat-nodes

# run osm2pgsql (it will take about 7 days to finish, so redirect stderr and stdout to a file and run it as a daemon process)
sudo apt-get install osm2pgsql
sudo -u postgres osm2pgsql -c -d openstreetmap -P 5432 -E 4326 -S /usr/share/osm2pgsql/default.style -k -s -C 8192 --flat-nodes /var/lib/osm/flat-nodes/flat-node-index-file --number-processes 8 /var/lib/osm/planet-latest.osm.bz2 > /var/lib/osm/osm2pgsql-stdout.log 2>&1 &
sudo -u postgres psql -d openstreetmap -c "SELECT * FROM information_schema.tables WHERE table_schema = 'public'"

Example code preprocess focus area (optional)

Preprocess new focus area tables in the Postgres database. A fully documented example .py file can be found at geoparsepy.example_preprocess_focus_area.py

import os, sys, logging, traceback, codecs, datetime, copy, time, ast, math, re, random, shutil, json
import soton_corenlppy, geoparsepy

LOG_FORMAT = ('%(message)s')
logger = logging.getLogger( __name__ )
logging.basicConfig( level=logging.INFO, format=LOG_FORMAT )
logger.info('logging started')

# focus area specification: the admin name chain (southampton, within south east england, within the united kingdom) is resolved via the global_cities_admin lookup table
dictFocusAreaSpec = {
	'southampton' : {
		'focus_area_id' : 'southampton',
		'admin': ['southampton','south east england', 'united kingdom'],
		'admin_lookup_table' : 'global_cities_admin',
	}
}

dictGlobalSpec = None

dictGeospatialConfig = geoparsepy.geo_parse_lib.get_geoparse_config( 
	lang_codes = ['en'],
	logger = logger,
	whitespace = u'"\u201a\u201b\u201c\u201d()',
	sent_token_seps = ['\n','\r\n', '\f', u'\u2026'],
	punctuation = """,;\/:+-#~&*=!?""",
	)

# separate database connections for the admin, point, polygon and line queries, which run as parallel SQL threads during preprocessing
dbHandlerPool = {}
dbHandlerPool['admin'] = soton_corenlppy.PostgresqlHandler.PostgresqlHandler( 'postgres', 'postgres', 'localhost', 5432, 'openstreetmap' )
dbHandlerPool['point'] = soton_corenlppy.PostgresqlHandler.PostgresqlHandler( 'postgres', 'postgres', 'localhost', 5432, 'openstreetmap' )
dbHandlerPool['poly'] = soton_corenlppy.PostgresqlHandler.PostgresqlHandler( 'postgres', 'postgres', 'localhost', 5432, 'openstreetmap' )
dbHandlerPool['line'] = soton_corenlppy.PostgresqlHandler.PostgresqlHandler( 'postgres', 'postgres', 'localhost', 5432, 'openstreetmap' )

for strFocusArea in dictFocusAreaSpec.keys() :
	logger.info( 'starting focus area ' + strFocusArea )
	jsonFocusArea = dictFocusAreaSpec[strFocusArea]
	geoparsepy.geo_preprocess_lib.create_preprocessing_tables( jsonFocusArea, dbHandlerPool['admin'], 'public', delete_contents = False, logger = logger )
	dictNewLocations = geoparsepy.geo_preprocess_lib.execute_preprocessing_focus_area( jsonFocusArea, dbHandlerPool, 'public', logger = logger )
	logger.info( 'finished focus area ' + strFocusArea )
	logger.info( 'location id range : ' + repr(dictNewLocations) )

dbHandlerPool['admin'].close()
dbHandlerPool['point'].close()
dbHandlerPool['poly'].close()
dbHandlerPool['line'].close()

Example code preprocess focus area output (optional)

logging started
loading stoplist from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-geo-stoplist-en.txt
loading whitelist from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-geo-whitelist.txt
loading blacklist from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-geo-blacklist.txt
loading building types from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-buildingtype-en.txt
loading location type corpus /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-buildingtype-en.txt
- 3 unique titles
- 76 unique types
loading street types from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-streettype-en.txt
loading location type corpus /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-streettype-en.txt
- 15 unique titles
- 32 unique types
loading admin types from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-admintype-en.txt
loading location type corpus /home/sem/.local/lib/python3.7/site-packages/geoparsepy/corpus-admintype-en.txt
- 14 unique titles
- 0 unique types
loading gazeteer from /home/sem/.local/lib/python3.7/site-packages/geoparsepy/gazeteer-en.txt
starting focus area southampton
starting preprocessing of new focus area : {'focus_area_id': 'southampton', 'admin': ['southampton', 'south east england', 'united kingdom'], 'admin_lookup_table': 'global_cities_admin'}
starting SQL threads
start SQL (point x 2)   .
start SQL (line x 2)   .
start SQL (poly x 2)   .
start SQL (admin x 2)   .
waiting for joins
  . end SQL (admin x 2)   .
  . end SQL (point x 2)   .
  . end SQL (line x 2)   .
  . end SQL (poly x 2)   .
join successful
finished focus area southampton
location id range : {'southampton_point': (1, 1327), 'southampton_line': (1, 2144), 'southampton_poly': (1, 2748), 'southampton_admin': (1, 7)}


geoparsepy's Issues

Geoparsing doesn't match some cities

I tried to geoparse some phrases but not all the cities are matched (for example: 'Sciacca' and 'Asciano').
Note that all the cities are present in the database and all the phrases are correctly tokenized.

EDIT: I noticed that if I manually whitelist the cities everything works fine, but why are they not shown directly?

Here is my code:

import soton_corenlppy
import geoparsepy
import logging

logger = logging.getLogger("geoparsepy")
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger.info('Logging started')

geospatial_config = geoparsepy.geo_parse_lib.get_geoparse_config(
    lang_codes=['it', 'en'],
    logger=logger
)

location_ids = {}
focus_areas = ['global_cities', 'europe_places', 'north_america_places', 'uk_places']
for focus_area in focus_areas:
    location_ids[focus_area + '_admin'] = [-1, -1]
    location_ids[focus_area + '_poly'] = [-1, -1]
    location_ids[focus_area + '_line'] = [-1, -1]
    location_ids[focus_area + '_point'] = [-1, -1]


# Create a connection with the database
database_handler = soton_corenlppy.PostgresqlHandler.PostgresqlHandler(
    user='postgres',
    passw=' ',
    hostname='localhost',
    port=5432,
    database='openstreetmap'
)

# Load a set of previously preprocessed locations from database
cached_locations = geoparsepy.geo_preprocess_lib.cache_preprocessed_locations(
    database_handle=database_handler,
    location_ids=location_ids,
    schema='public',
    geospatial_config=geospatial_config
)
logger.info(f"Loaded {len(cached_locations)} position")

# Close connection with the database
database_handler.close()


# Compile an inverted index from a list of arbitrary data where one column is a phrase string
indexed_locations = geoparsepy.geo_parse_lib.calc_inverted_index(
    list_data=cached_locations,
    dict_geospatial_config=geospatial_config
)
logger.info(f"Indexed {len(indexed_locations.keys())} phrases")

# Create an index of osmid to row indexes in the cached_locations
osmid_lookup = geoparsepy.geo_parse_lib.calc_osmid_lookup(cached_locations=cached_locations)


listText = [
    u'hello New York, USA its Bill from Bassett calling',
    u'live on the BBC Victoria Derbyshire is visiting Derbyshire for an exclusive UK interview',
    u'Domani vado a Roma, nel Lazio',
    u'Io sono di Sciacca, in provincia di agrigento',
    u'Vengo dalla provincia di Agrigento, in Sicilia',
    u'Mi sdraio sul prato del mio vicino',
    u'Pavia e Ravenna sono belle città',
    u'Voglio andare a new york',
    u'Mi trovo a San Giuliano Terme',
    u'Io sono di Sciacca, in provincia di Agrigento',
    u'Martina vive a Nuoro ma vorrebbe andare ad Agrigento',
    u'Agrigento è la provincia che contiene il comune di Sciacca',
    u'Vicino san giuliano terme c\'è un comune che si chiama Asciano',
    u'La città di Sciacca si trova in provincia di Agrigento',
    u'Mi trovo a Sciacca'
]

listTokenSets = []
for text in listText:
    # Tokenize a text entry into unigram tokens text will be cleaned and tokenize
    listToken = soton_corenlppy.common_parse_lib.unigram_tokenize_text(
        text=text,
        dict_common_config=geospatial_config
    )
    listTokenSets.append(listToken)


# Geoparse token sets using a set of cached locations
listMatchSet = geoparsepy.geo_parse_lib.geoparse_token_set(
    token_set=listTokenSets,
    dict_inverted_index=indexed_locations,
    dict_geospatial_config=geospatial_config
)

# Print the matched location
for i in range(len(listMatchSet)):
    logger.info(f"\nText: {listText[i]}")
    listMatch = listMatchSet[i]
    for tupleMatch in listMatch:
        logger.info(str(tupleMatch))

The output is the following:

C:\Users\calog\PycharmProjects\geoparsepy\venv\Scripts\python.exe C:/Users/calog/PycharmProjects/geoparsepy/main2.py
Logging started
loading stoplist from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-geo-stoplist-it.txt
loading stoplist from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-geo-stoplist-en.txt
loading whitelist from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-geo-whitelist.txt
loading blacklist from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-geo-blacklist.txt
loading building types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-buildingtype-it.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-buildingtype-it.txt
- 0 unique titles
- 61 unique types
loading street types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-streettype-it.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-streettype-it.txt
- 10 unique titles
- 14 unique types
loading admin types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-admintype-it.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-admintype-it.txt
- 10 unique titles
- 0 unique types
loading building types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
- 3 unique titles
- 76 unique types
loading street types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-streettype-en.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-streettype-en.txt
- 15 unique titles
- 32 unique types
loading admin types from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-admintype-en.txt
loading location type corpus C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\corpus-admintype-en.txt
- 14 unique titles
- 0 unique types
loading gazeteer from C:\Users\calog\PycharmProjects\geoparsepy\venv\lib\site-packages\geoparsepy\gazeteer-en.txt
caching locations : {'global_cities_admin': [-1, -1], 'global_cities_poly': [-1, -1], 'global_cities_line': [-1, -1], 'global_cities_point': [-1, -1], 'europe_places_admin': [-1, -1], 'europe_places_poly': [-1, -1], 'europe_places_line': [-1, -1], 'europe_places_point': [-1, -1], 'north_america_places_admin': [-1, -1], 'north_america_places_poly': [-1, -1], 'north_america_places_line': [-1, -1], 'north_america_places_point': [-1, -1], 'uk_places_admin': [-1, -1], 'uk_places_poly': [-1, -1], 'uk_places_line': [-1, -1], 'uk_places_point': [-1, -1]}
Loaded 800820 position
Indexed 605884 phrases

Text: hello New York, USA its Bill from Bassett calling
(1, 2, {(61785451,), (-175905,), (151937435,), (316976734,), (2218262347,), (29457403,), (-61320,)}, ('new', 'york'))
(2, 2, {(153924230,), (151528825,), (158656063,), (20913294,), (151672942,), (153595296,), (153968758,), (316990182,), (151651405,), (-134353,), (-1425436,), (153473841,)}, ('york',))
(4, 4, {(-148838,)}, ('usa',))
(8, 8, {(253067120,), (151840681,), (151463868,)}, ('bassett',))

Text: live on the BBC Victoria Derbyshire is visiting Derbyshire for an exclusive UK interview
(4, 4, {(75538688,), (385402175,), (151521359,), (74701108,), (-5606595,), (462241727,), (151395812,), (460070685,), (447925715,), (277608416,), (-1828436,), (-407423,), (154301948,), (-2316741,), (435240340,), (-5606596,), (463188523,), (151336948,), (151476805,), (30189922,), (158651084,), (-2256643,), (-10307525,)}, ('victoria',))
(8, 8, {(-195384,)}, ('derbyshire',))
(12, 12, {(-62149,)}, ('uk',))

Text: Domani vado a Roma, nel Lazio
(1, 1, {(151686158,)}, ('vado',))
(3, 3, {(385056116,), (-41313,)}, ('roma',))
(6, 6, {(-40784,)}, ('lazio',))

Text: Io sono di Sciacca, in provincia di agrigento

Text: Vengo dalla provincia di Agrigento, in Sicilia
(7, 7, {(-39152,)}, ('sicilia',))

Text: Mi sdraio sul prato del mio vicino
(3, 3, {(-42619,)}, ('prato',))

Text: Pavia e Ravenna sono belle città
(0, 0, {(158289705,), (-43483,), (230101550,)}, ('pavia',))
(2, 2, {(154313500,), (151333458,), (151866924,), (154149873,), (-42889,)}, ('ravenna',))
(4, 4, {(154337430,)}, ('belle',))

Text: Voglio andare a new york
(3, 4, {(61785451,), (-175905,), (151937435,), (316976734,), (2218262347,), (29457403,), (-61320,)}, ('new', 'york'))
(4, 4, {(153924230,), (151528825,), (158656063,), (20913294,), (151672942,), (153595296,), (153968758,), (316990182,), (151651405,), (-134353,), (-1425436,), (153473841,)}, ('york',))

Text: Mi trovo a San Giuliano Terme
(1, 1, {(62515792,)}, ('trovo',))
(3, 4, {(4594763552,), (130871200,), (6986638289,), (6008076012,), (3653962105,), (1213463381,), (5318245098,), (2815922128,)}, ('san', 'giuliano'))
(3, 5, {(258512997,)}, ('san', 'giuliano', 'terme'))
(5, 5, {(27013444,), (-1837372,)}, ('terme',))

Text: Io sono di Sciacca, in provincia di Agrigento

Text: Martina vive a Nuoro ma vorrebbe andare ad Agrigento
(3, 3, {(-39979,)}, ('nuoro',))
(8, 8, {(-39151,)}, ('agrigento',))

Text: Agrigento è la provincia che contiene il comune di Sciacca
(0, 0, {(-39151,)}, ('agrigento',))

Text: Vicino san giuliano terme c'è un comune che si chiama Asciano
(1, 2, {(4594763552,), (130871200,), (6986638289,), (6008076012,), (3653962105,), (1213463381,), (5318245098,), (2815922128,)}, ('san', 'giuliano'))
(1, 3, {(258512997,)}, ('san', 'giuliano', 'terme'))
(3, 3, {(27013444,), (-1837372,)}, ('terme',))

Text: La città di Sciacca si trova in provincia di Agrigento

Text: Mi trovo a Sciacca
(1, 1, {(62515792,)}, ('trovo',))

Running Geoparsepy with other languages

Currently the following languages are supported:

English, French, German, Italian, Portuguese, Russian, Ukrainian
All other languages will work but there will be no language specific token expansion available

I've followed the instructions and gotten geoparsepy working with the example.

I tried adding a sentence to your listText: u'Hola, vivo en Madrid España' but it's not finding anything. The location "Madrid España" should be pretty easy to find as it's a direct lookup.

Do you have any advice on how to approach handling other languages?

Broken encoding for cyrillic rows

You write: "Download pre-processed UTF-8 encoded SQL table dumps from OSM image dated dec 2019", but the database from which the dumps were generated used the English_United States.1252 encoding. Because of that we have invalid data in the SQL files :( (see attached screenshot).

There are also some rows without localization (see attached screenshot).

Do I need to run osm2pgsql myself? Or is there some other solution?

Reducing false positives

This is not really an issue, but I am writing this to start a conversation and ask a few questions. First of all thanks a lot for making this public, it is a very useful tool.

Sometimes geoparsepy returns many false positives, which makes it difficult to filter out the locations that we are interested in.

For example, take the sentence "They are a ga machine". Just a random sentence, no meaning. "ga" is picked up as a possible location, and matched to a number of possible osmid.

How can we filter out such false positives? I noticed that very often this happens with short words, of 2-3 letters. A very rough approach would be to filter out all short words that returned a match, but I am sure something more nuanced is possible.

Thanks again

Missing data for column "tags"

Dear Stuart,

When I restore the uk_places table using your provided "uk_places.sql" file, I encounter the error "missing data for column "tags" CONTEXT: COPY uk_places_point, line 1: "1 Aaron's Hill {718482994} {-151304,-62149,-58447,-57582} 0101000020E6100000C737CAB0402AE4BFC8A9E7EE..."". I checked the content of the SQL file and suspect that there are some empty values for the column tags, e.g. the first row, Aaron's Hill.

I am wondering if there is a way to resolve this, e.g. using '' to fill the empty values?

Thanks in advance.

error in reading postgresql database

I am trying to replicate the example as explained in the docs.

However, I am having issues in extracting all files from geoparsepy_preprocessed_tables.tar.gz (using 7zip on windows): I can only extract global_cities.sql and europe_places.sql.

I then create the PostgreSQL database following the example, and launch the python example file. The script fails here:

cached_locations = geoparsepy.geo_preprocess_lib.cache_preprocessed_locations( databaseHandle, dictLocationIDs, 'public', dictGeospatialConfig )

This is the traceback:

logging started
loading stoplist from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-geo-stoplist-en.txt
loading whitelist from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-geo-whitelist.txt
loading blacklist from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-geo-blacklist.txt
loading building types from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
loading location type corpus C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-buildingtype-en.txt
- 3 unique titles
- 76 unique types
loading street types from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-streettype-en.txt
loading location type corpus C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-streettype-en.txt
- 15 unique titles
- 32 unique types
loading admin types from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-admintype-en.txt
loading location type corpus C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\corpus-admintype-en.txt
- 14 unique titles
- 0 unique types
loading gazeteer from C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\gazeteer-en.txt
caching locations : {'europe_places_admin': [-1, -1], 'europe_places_poly': [-1, -1], 'europe_places_line': [-1, -1], 'europe_places_point': [-1, -1], 'global_cities_admin': [-1, -1], 'global_cities_poly': [-1, -1], 'global_cities_line': [-1, -1], 'global_cities_point': [-1, -1]}
Traceback (most recent call last):
  File "C:\Users\**\20200819_geoparsepy_example.py", line 29, in <module>
    cached_locations = geoparsepy.geo_preprocess_lib.cache_preprocessed_locations( databaseHandle, dictLocationIDs, 'public', dictGeospatialConfig )
  File "C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\geoparsepy\geo_preprocess_lib.py", line 1649, in cache_preprocessed_locations
    listRows = database_handle.execute_sql_query_batch( listSQL, timeout_statement, timeout_overall )
  File "C:\Users\**\AppData\Local\Programs\Python\Python37\lib\site-packages\soton_corenlppy\PostgresqlHandler.py", line 348, in execute_sql_query_batch
    raise Exception( 'SQL query failed (timeout retrying) : ' + strLastError + ' : ' + tupleStatement[0] )
Exception: SQL query failed (timeout retrying) : ['42P01'] UndefinedTable('relation "public.europe_places_admin" does not exist\nLINE 1: ...gions,ST_AsText(geom),hstore_to_matrix(tags) FROM public.eur...\n                                                             ^\n') : SELECT concat('europe_places_admin_',loc_id),name,osm_id_set,admin_regions,ST_AsText(geom),hstore_to_matrix(tags) FROM public.europe_places_admin

Database setup fails

Hi,

I have made several attempts to set up the database required for geoparsepy, but failed to do so across several devices and environments. I tried it on macOS locally and with Docker, and the same on Windows. I am not 100% certain, but I think the encoding of the SQL files is somehow messed up. I am also struggling to set up the collation from the SQL files.

Can anyone confirm that the dumps actually work and maybe give some advice?

Thanks in advance.

possible installation problem with data files encoding

There is a possible problem with either the database dump files (encoding?) or the installation instructions.
(Sorry if this is better suited to a troubleshooting question than an issue; I didn't find another way to communicate.)

Attempt to import database files

psql -h localhost -U postgres -d openstreetmap -f global_cities.sql

leads to many syntax errors like

psql:global_cities.sql:81540: ERROR: syntax error at or near "章丘"
LINE 1: 章丘", "wikidata"=>"Q197392", "wikipedia"=>"en:Zhangqiu Di...

Installation/environment notes:

$ cat /etc/issue
Ubuntu 18.04.5 LTS \n \l
$ echo $LANG
en_US.UTF-8

Since the dump files were created with Postgres 11.3, it was installed with the following Dockerfile:

FROM postgres:11.3

RUN apt-get update \
    && apt-get install wget -y \
    && apt-get install postgresql-11-postgis-3 -y \
    && apt-get install postgresql-11-postgis-3-scripts -y \
    && apt-get install postgis -y

COPY ./db.sql /docker-entrypoint-initdb.d/

# docker run --name pg-docker -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postres_posgis11:latest
#psql -h localhost -U postgres -d openstreetmap -f global_cities.sql
#psql -h localhost -U postgres -d openstreetmap -f uk_places.sql
#psql -h localhost -U postgres -d openstreetmap -f north_america_places.sql
#psql -h localhost -U postgres -d openstreetmap -f europe_places.sql

(base) olga@anyclt104:~/workspace/edison/geoparsy_test/external/db$ cat db.sql
CREATE DATABASE openstreetmap;
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
CREATE EXTENSION IF NOT EXISTS postgis_tiger_geocoder;
CREATE EXTENSION IF NOT EXISTS hstore;

Minor issues getting the .sql files to load correctly

There are a couple of issues with the BOM and the username. I was using a Docker image to host Postgres, and ran the following to pre-process the .sql files; everything loaded correctly after that.

Replace postgres with your username of choice.

fixed_dir=./data/fixed
mkdir -p $fixed_dir
sed '1s/^\xEF\xBB\xBF//' < ./data/global_cities.sql > ${fixed_dir}/global_cities.sql
sed '1s/^\xEF\xBB\xBF//' < ./data/uk_places.sql > ${fixed_dir}/uk_places.sql
sed '1s/^\xEF\xBB\xBF//' < ./data/north_america_places.sql > ${fixed_dir}/north_america_places.sql
sed '1s/^\xEF\xBB\xBF//' < ./data/europe_places.sql > ${fixed_dir}/europe_places.sql

sed 's/TO sem/TO postgres/' -i ${fixed_dir}/global_cities.sql
sed 's/TO sem/TO postgres/' -i ${fixed_dir}/uk_places.sql
sed 's/TO sem/TO postgres/' -i ${fixed_dir}/north_america_places.sql
sed 's/TO sem/TO postgres/' -i ${fixed_dir}/europe_places.sql
