
Comments (12)

lonvia commented on May 25, 2024

Your test file works OK for me. Please check that your osm2pgsql is indeed compiled for 64-bit IDs. When you run osm2pgsql you should get something like this:

me@here:~/osm2pgsql$ ./osm2pgsql
osm2pgsql SVN version 0.83.0 (64bit id space)

If that checks out ok, could you provide the full error message and possibly a stack trace of the segfault?

4x4falcon commented on May 25, 2024

Just recompiled to confirm 64-bit.

Output from command is:


osm2pgsql SVN version 0.83.0 (64bit id space)

Using projection SRS 4326 (Latlong)
NOTICE: type "stringlanguagetype" does not exist, skipping
NOTICE: type "keyvaluetype" does not exist, skipping
NOTICE: function get_connected_ways(pg_catalog.int4[]) does not exist, skipping
Allocating memory for dense node cache
Allocating dense node cache in one big chunk
Allocating memory for sparse node cache
Sharing dense sparse
Node-cache: cache=4250MB, maxblocks=544001*8192, allocation method=11
Mid: pgsql, scale=10000000 cache=4250
Setting up table: planet_osm_nodes
NOTICE: table "planet_osm_nodes" does not exist, skipping
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "planet_osm_nodes_pkey" for table "planet_osm_nodes"
Setting up table: planet_osm_ways
NOTICE: table "planet_osm_ways" does not exist, skipping
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "planet_osm_ways_pkey" for table "planet_osm_ways"
Setting up table: planet_osm_rels
NOTICE: table "planet_osm_rels" does not exist, skipping
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "planet_osm_rels_pkey" for table "planet_osm_rels"

Reading in file: node_0002.osm
Segmentation fault (core dumped)


How do I get a stack trace?

Cheers
Ross

4x4falcon commented on May 25, 2024

Further to this if I run the following:

./osm2pgsql -lsc -O null --hstore -C 4250 -d nominatim node_0002.osm

it completes correctly.

However, with either -O gazetteer or -O pgsql, osm2pgsql segfaults.

Cheers
Ross

lonvia commented on May 25, 2024

The IDs are too large for the dense node cache. There is an unchecked array access around here. @apmon you probably know better what the best fallback is when blocks is too small. ram_cache_nodes_get_dense() has the same problem.

You can force osm2pgsql to switch to sparse caching with the option --cache-strategy sparse as a workaround.
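
For illustration only, here is a minimal sketch of the failure mode (the function name id2block and the BLOCK_SHIFT value of 10 are assumptions here; only the NUM_BLOCKS define mirrors node-ram-cache.c). A dense node cache turns an ID into a block index by shifting, and an ID beyond the configured range yields an index past the end of the block array:

#include <stdint.h>
#include <stdio.h>

typedef int64_t osmid_t;

#define BLOCK_SHIFT 10                                    /* assumed value */
#define NUM_BLOCKS  (((osmid_t)1) << (36 - BLOCK_SHIFT))  /* as in node-ram-cache.c */

/* hypothetical id2block: which dense-cache block does this node id fall into? */
static osmid_t id2block(osmid_t id)
{
    return id >> BLOCK_SHIFT;
}

int main(void)
{
    osmid_t ok_id  = 123456789LL;        /* well inside the 2^36 id range        */
    osmid_t big_id = (osmid_t)1 << 40;   /* an id beyond the dense cache range   */

    printf("NUM_BLOCKS = %lld\n", (long long)NUM_BLOCKS);
    printf("id2block(%lld) = %lld  (valid index)\n",
           (long long)ok_id, (long long)id2block(ok_id));
    printf("id2block(%lld) = %lld  (past the end of the block array)\n",
           (long long)big_id, (long long)id2block(big_id));
    /* Reading blocks[id2block(big_id)] without a bounds check is the
     * out-of-range access that produces the segfault described above. */
    return 0;
}

With the data from the earlier comment, the workaround would be something like ./osm2pgsql -lsc -O gazetteer --hstore -C 4250 --cache-strategy sparse -d nominatim node_0002.osm.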

4x4falcon commented on May 25, 2024

Thanks I'd found that late last night.

So even though there is a workaround, 64-bit IDs will not work completely correctly with osm2pgsql.

Should the id2block function be modified to check the size of the ID (32-bit or 64-bit) and return an appropriate result?

apmon commented on May 25, 2024

If I read the code correctly, osm2pgsql supports node IDs of up to 36 bits with cache-strategy dense, i.e. up to 68719476736. As we have only just rolled into bit 32, we are still nowhere near the limit that osm2pgsql supports.

osm2pgsql should still not segfault, but erroring out and requiring --cache-strategy sparse seems reasonable to me for these cases at the moment.

One can also increase the limit by changing https://github.com/openstreetmap/osm2pgsql/blob/master/node-ram-cache.c#L65, but as it isn't currently necessary and it gobbles up more virtual memory, I don't want to do it upstream.
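
To make the 36-bit figure concrete, here is a hedged sketch of the arithmetic and of the kind of bounds check suggested above (BLOCK_SHIFT = 10 and the function names are assumptions; the NUM_BLOCKS define is the one quoted in the next comment):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef int64_t osmid_t;

#define BLOCK_SHIFT 10                                    /* assumed value */
#define NUM_BLOCKS  (((osmid_t)1) << (36 - BLOCK_SHIFT))  /* as quoted in the next comment */

/* Upper bound of the dense-cache id range: NUM_BLOCKS blocks of 2^BLOCK_SHIFT
 * nodes each, i.e. 2^36 = 68719476736; ids must lie below this. */
static const osmid_t DENSE_ID_LIMIT = NUM_BLOCKS << BLOCK_SHIFT;

/* Sketch of the suggested behaviour: refuse an oversized id with a clear
 * error instead of reading past the end of the block array. */
static osmid_t checked_id2block(osmid_t id)
{
    osmid_t block = id >> BLOCK_SHIFT;
    if (block < 0 || block >= NUM_BLOCKS) {
        fprintf(stderr, "node id %lld exceeds the dense cache limit of %lld; "
                        "try --cache-strategy sparse\n",
                (long long)id, (long long)DENSE_ID_LIMIT);
        exit(EXIT_FAILURE);
    }
    return block;
}

int main(void)
{
    printf("dense cache id limit: %lld\n", (long long)DENSE_ID_LIMIT);
    printf("block for id 42: %lld\n", (long long)checked_id2block(42));
    return 0;
}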

4x4falcon commented on May 25, 2024

OK, so what needs to be changed in

#define NUM_BLOCKS (((osmid_t)1) << (36 - BLOCK_SHIFT))

to handle IDs larger than 36 bits?

Logically it would be the 36, changed to say 40 or 64 for 40-bit and 64-bit IDs respectively.

apmon commented on May 25, 2024

It should be. But remember you are allocating a large array in memory with NUM_BLOCKS elements.

So if you set it to 64, you are creating an array of 2^54 elements; as each element is 24 bytes, you are allocating an array of ~393216 TB! I am not sure your operating system will like that very much ;-)

If you want to use the full 64bit ID space, then it is going to be very sparse, so using --cache-strategy sparse is the appropriate option.
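
As a rough back-of-the-envelope check (a sketch only: the 24 bytes per entry is taken from the comment above, BLOCK_SHIFT = 10 is an assumption), the size of that block index array for different ID widths works out like this:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const int block_shift = 10;          /* assumed */
    const double bytes_per_entry = 24.0; /* per the comment above */
    const int id_bits[] = { 36, 40, 64 };

    for (int i = 0; i < 3; i++) {
        uint64_t num_blocks = (uint64_t)1 << (id_bits[i] - block_shift);
        double gb = num_blocks * bytes_per_entry / (1024.0 * 1024.0 * 1024.0);
        printf("%2d-bit ids: 2^%d block entries, index array ~%.1f GB\n",
               id_bits[i], id_bits[i] - block_shift, gb);
    }
    /* prints roughly 1.5 GB for 36-bit ids, 24 GB for 40-bit ids, and
     * ~402653184 GB (i.e. ~393216 TB) for the full 64-bit id space */
    return 0;
}

So a 40-bit index is still plausible if the virtual memory is available, while a full 64-bit dense index is out of the question and sparse caching is the right tool.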

4x4falcon commented on May 25, 2024

Understood.

I'm looking at 40-bit at the moment, and this was more for my own reference.

The --cache-strategy sparse works fine.

apmon commented on May 25, 2024

I am closing this request, as it is a niche application and with --cache-strategy sparse there is a good workaround for it.

rakeshsukla53 commented on May 25, 2024

@lonvia @4x4falcon I am sorry for tagging you all, but I am running into the exact same error and am not able to resolve it. I initially tried reducing the cache size to 512MB and then to 256MB, but it is returning the same error.

Using projection SRS 4326 (Latlong)
NOTICE: table "place" does not exist, skipping
NOTICE: type "keyvalue" does not exist, skipping
NOTICE: type "wordscore" does not exist, skipping
NOTICE: type "stringlanguagetype" does not exist, skipping
NOTICE: type "keyvaluetype" does not exist, skipping
NOTICE: function get_connected_ways(pg_catalog.int4[]) does not exist, skipping
Allocating memory for dense node cache
Out of memory for node cache dense index, try using "--cache-strategy sparse" instead
Error occurred, cleaning up
ERROR: Error executing external command: /home/nominatim/Nominatim/osm2pgsql/osm2pgsql -lsc -O gazetteer --hstore -C 2048 -P 5432 -d nominatim /home/nominatim/data/latest.osm.pbf

and then I tried to use --cache-strategy sparse, as you can see below, but it is not working:

~/Nominatim/utils/setup.php   --osm-file data/latest.osm.pbf   --all --osm2pgsql --cache-strategy sparse 

I don't think I know how to use --cache-strategy sparse correctly, and I would really appreciate any help you can provide. I have also attached the error screenshot (capture8).

pnorman commented on May 25, 2024

> @lonvia @4x4falcon I am sorry for tagging you all, but I am running into the exact same error and am not able to resolve it. I initially tried reducing the cache size to 512MB and then to 256MB, but it is returning the same error.

This issue is about using a custom OSM file with extra-large IDs. If you're using normal, unmodified OSM data, you are encountering a different issue, so you should open a new ticket.
