overpass-api's Issues

Perf: Scalability

Further scalability tests in 2017 on this branch:

  • Test case: [out:json];node(63174280);out; with the node id continuously increasing for each request
  • Up to 32 parallel processes via jmeter, run on the server itself (no network overhead)
  • Minutely updates + delta area update active
  • Up to 12 FastCGI processes, managed by supervisord
  • Web server: nginx
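For reference, a comparable load driver can be sketched in plain C++11. This is only a rough sketch under the assumptions that curl is installed and the interpreter is reachable at http://localhost/api/interpreter (it is not the jmeter test plan actually used); it reproduces the pattern of 32 parallel workers and a continuously increasing node id:

#include <atomic>
#include <cstdlib>
#include <string>
#include <thread>
#include <vector>

int main()
{
  const int workers = 32;                        // up to 32 parallel clients
  const long requests_per_worker = 1000;         // arbitrary run length
  std::atomic<long long> node_id(63174280LL);    // continuously increasing node id

  std::vector<std::thread> pool;
  for (int i = 0; i < workers; ++i)
    pool.emplace_back([&]() {
      for (long r = 0; r < requests_per_worker; ++r)
      {
        long long id = node_id++;
        // curl URL-encodes the query; the endpoint path is an assumption
        std::string cmd = "curl -s -o /dev/null --data-urlencode "
                          "\"data=[out:json];node(" + std::to_string(id) + ");out;\" "
                          "http://localhost/api/interpreter";
        std::system(cmd.c_str());
      }
    });

  for (auto& t : pool)
    t.join();
  return 0;
}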

[Result charts attached as screenshots]

Performance test 2016: Full attic database creation in 4 days

Originally posted here: https://listes.openstreetmap.fr/wws/arc/overpass/2016-05/msg00011.html

As part of the Overpass Performance Project 2016, improving the overall time to create a full attic db was one of the primary focus topics. If you have set up your own instance before, you probably used the existing clone files. That is still the recommended approach for most users.
However, when switching to a different compression algorithm or in case of bugs in the template db implementation, being able to quickly set up a full attic database from scratch is of paramount importance.

Unfortunately, there's very little documentation available on previous run times. Back in 2014, Roland set up a database covering roughly 700 days, which reportedly took less than 1 week. That didn't include compression at the time. For the current v0.7.52 zlib-compressed database, I couldn't find any figures at all. Some GitHub tickets suggest that the current rate of catching up using minutely diffs is about 30 times real time.

It's about time to dig a bit deeper.

Initial tests on the dev instance quickly turned out to be quite time consuming, with an estimated total runtime of at least 6 weeks. After switching to a more powerful 8-core server with 32 GB memory and SSD, initial tests on lz4 cut the time down to 13 days. Processing updates was done using daily diffs rather than minutely diffs. That still seemed quite a lot for 1340 days (= all changes since the license change in September 2012). Thanks to the fast SSDs, large parts of the processing were CPU bound. Nevertheless, only one core was in use the whole time.

I decided to move dedicated parts of the database update logic to multi-threaded processing (based on C++11 standard mechanisms, no external libs). That affects solely those parts where 8 different files are each read from disk, decompressed, the changes applied, compressed, and written back to disk. Also, I reorganized the database a few times via db cloning, mainly to cut down disk space. That brought the full attic db setup down to 8-8.5 days.
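For illustration, here is a minimal sketch of that parallelization, assuming only C++11 (std::thread); the file names and the process_file body are placeholders, not the actual update code:

#include <string>
#include <thread>
#include <vector>

// Placeholder for the real per-file pipeline:
// read block from disk -> decompress -> apply changes -> compress -> write back.
void process_file(const std::string& filename)
{
  (void)filename;
}

int main()
{
  // One independent database file per thread (names are illustrative only).
  const std::vector<std::string> files = {
    "file_1.bin", "file_2.bin", "file_3.bin", "file_4.bin",
    "file_5.bin", "file_6.bin", "file_7.bin", "file_8.bin"
  };

  std::vector<std::thread> workers;
  for (const auto& f : files)
    workers.emplace_back(process_file, f);   // 8 files processed in parallel

  for (auto& t : workers)
    t.join();                                // wait until all files are updated
  return 0;
}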

The next step was to increase the number of days handled in one update run. So far I had used update_database, but then switched to update_from_dir and apply_osc_to_db.sh. Usually, that script is used to apply several minutely diffs in one go. Well, why not use that mechanism to apply several days at once, permitting up to 4 GB of uncompressed change file? Depending on the data, this corresponds to 6-12 days' worth of OSM data. Running the update this way seemed to work quite well with 32 GB of main memory, although update_from_dir sometimes needed more than 20 GB. If you're short on main memory, that may not be an option.

Well, luckily, the total processing time dropped down to just 4 days, corresponding to about 330 OSM days per day. This should be good enough for the time being.

Two additional points worth noting:

  • The overall processing slows down quite a bit over time, likely caused by the increased amount of data to be processed. I didn't investigate this part any further, but 2-3 years down the road, that might need some revisiting.
  • Database size grows quite a lot during the update process. A subsequent clone-db sometimes reduced that size by 50-60 GB. Again, I didn't investigate where those large differences come from.

I put lots of stats on the wiki page [1]. If I find some more time, I'll probably add further comments to that page. Also, you can find the full attic db for lz4 on the dev instance [2]. The respective branch is mentioned on the wiki page as well.

Best,
mmd

[1] https://wiki.openstreetmap.org/wiki/User:Mmd/Overpass_API/Performance_Project_2016/Full_Attic_DB_Setup
[2] http://dev.overpass-api.de/clone_lz4/

Unable to set up Overpass API successfully on the test759 branch

@mmd-osm -
I was trying to build and deploy the Overpass API from your test759 branch to benefit from all the improvements you made.
I followed the instructions in #3, changed the make configuration as described in #6 (comment), and built the code successfully.
I cloned the zlib db from http://dev.overpass-api.de/clone/2020-10-20 and ran the dispatcher; it always fails with "Unknown Index file".
Then I tried cloning the lz4 db from http://dev.overpass-api.de/clone/2020-08-12_lz4/ and, at least, running the dispatcher directly first before starting updates. That did not work either.
Then I tried initializing the db with a small dataset (luxembourg-latest.osm.bz2); running the conversion from gz to lz4 compression then terminates with a core dump.

Will it be possible to deploy the test759 branch with the instructions I am referring to, using the default instructions wherever they are applicable?

Attic db load

[Screenshot attached]

  • Days 1334 -> 3152
  • Package size: 3 days
  • osh.pbf format

Perf: Full day simulation (2017 perf test)

Intro

Full simulation on this branch, Commit: 4957911

HW & SW

Preparation

Compiling sources

  • Installing relevant libraries
    o sudo apt-get update
    o sudo apt-get install -y --force-yes --no-install-recommends g++ make expat autoconf automake autotools-dev libtool curl ca-certificates unzip
    o sudo apt-get install bzip2 libexpat1-dev zlib1g-dev liblz4-dev libfcgi-dev libevent-dev libbz2-dev libicu-dev libosmium2-dev supervisor
    o curl -o osm-3s_v0.7.58_mmd.zip https://codeload.github.com/mmd-osm/Overpass-API/zip/test758_lz4hash
    o unzip -q osm-3s_v0.7.58_mmd.zip

  • Running autotools
    o cd Overpass-API-test758_lz4hash/src
    o autoscan
    o aclocal
    o autoheader
    o libtoolize
    o automake --add-missing
    o autoconf
    o cd ..

  • Configure options
    o mkdir -p build
    o cd build
    o ../src/configure CXXFLAGS="-O2 -mtune=native -ggdb -std=c++11" LDFLAGS="-lpthread -lbz2 -levent -licuuc -licui18n" --enable-fastcgi --enable-lz4 --prefix=/srv/osm3s
    o make V=0 -j7
    o make install

Preparing database

Converting the zlib database to an lz4-compressed database and adding the tagged-nodes file

  • Download clone from http://dev.overpass-api.de/clone to {{database directory zlib}}
  • Run osm3s_query --db-dir={{database directory zlib}} --clone={{new lz4 database directory}} --clone-compression=lz4 --clone-map-compression=lz4
  • Run create_tagged_nodes {{new lz4 database directory}} - this step creates two new files nodes_tagged.bin and nodes_tagged.bin.idx
  • Continue as usual with {{new lz4 database directory}} being your database directory

Configuring supervisor

Stats

Area                               Details
Query data source                  August 30, 2017 (main instance)
Start reprocessing                 02/Sep/2017:10:15:17 +0200
End reprocessing                   02/Sep/2017:20:58:47 +0200
Total processing time              10:43 (h:mm)
Total number of queries executed   534497

Response times (quantiles)

Quantile   50%      90%      95%     99%     99.5%    99.9%    99.99%    99.999%
Response   0.062s   0.685s   1.58s   5.5s    11.06s   33.92s   168.37s   853.6s

Reprocessed with 7 parallel tasks, 0s wait time.

Most expensive queries

  1. Similar to Geocoding example for a large area

Executed multiple times because of timeout issues

  2. Highway node intersection

Executed multiple times because of timeout issues

<?xml version="1.0" encoding="UTF-8"?><osm-script timeout="1800" element-limit="1073741824">
  <query type="way" into="hw">
    <has-kv k="highway"/>
    <has-kv k="highway" modv="not" regv="footway|cycleway|path|service|track"/>
    <bbox-query s="14.498508149446216" w="120.94779968261719" n="14.67061869442178" e="121.0638427734375"/>
  </query>
  
  <foreach from="hw" into="w">
    <recurse from="w" type="way-node" into="ns"/>
    <recurse from="ns" type="node-way" into="w2"/>
    <query type="way" into="w2">
      <item set="w2"/>
      <has-kv k="highway"/>
      <has-kv k="highway" modv="not" regv="footway|cycleway|path|service|track"/>
    </query>
    <difference into="wd">
      <item set="w2"/>
      <item set="w"/>
    </difference>
    <recurse from="wd" type="way-node" into="n2"/>
    <recurse from="w"  type="way-node" into="n3"/>
    <query type="node">
      <item set="n2"/>
      <item set="n3"/>
    </query>
    <print/>
  </foreach>
</osm-script>

  3. Opening hours analysis
[date:"2017-08-31T00:00:00"][out:json][timeout:4000];
area["type"="boundary"]["ISO3166-2"="DE-NW"];
foreach(
    node(area)["opening_hours"]->.t; .t out tags;
    node(area)["opening_hours:kitchen"]->.t; .t out tags;
    node(area)["opening_hours:warm_kitchen"]->.t; .t out tags;
    node(area)["happy_hours"]->.t; .t out tags;
    node(area)["delivery_hours"]->.t; .t out tags;
    node(area)["opening_hours:delivery"]->.t; .t out tags;
    node(area)["lit"]->.t; .t out tags;
    node(area)["smoking_hours"]->.t; .t out tags;
    node(area)["collection_times"]->.t; .t out tags;
    node(area)["service_times"]->.t; .t out tags;
    node(area)["fee"]->.t; .t out tags;
    way(area)["opening_hours"]->.t; .t out tags;
    way(area)["opening_hours:kitchen"]->.t; .t out tags;
    way(area)["opening_hours:warm_kitchen"]->.t; .t out tags;
    way(area)["happy_hours"]->.t; .t out tags;
    way(area)["delivery_hours"]->.t; .t out tags;
    way(area)["opening_hours:delivery"]->.t; .t out tags;
    way(area)["lit"]->.t; .t out tags;
    way(area)["smoking_hours"]->.t; .t out tags;
    way(area)["collection_times"]->.t; .t out tags;
    way(area)["service_times"]->.t; .t out tags;
    way(area)["fee"]->.t; .t out tags;
);

Also slow for ["ISO3166-2"="DE-BY"]

Code: https://github.com/opening-hours/opening_hours.js/blob/master/Makefile#L338-L356

A regexp could be used instead (already implemented but inactive); additionally, [fee!=no][fee!=yes][lit!=no][lit!=yes] would filter out the uninteresting plain yes/no values.

The query aborts anyway because it uses too much memory: "runtime error: Query run out of memory using about 2048 MB of RAM."

If the query runs a few minutes after midnight, the [date:...] setting can be left out altogether; it just doesn't matter.

Follow-up actions:

  1. Overpass Turbo - Map example on large bounding box

  2. Query with very large bbox

The bbox is counterproductive; removing it gives faster results.

[out:json][timeout:180][maxsize:1048576];
(
  node["amenity"="compressed_air"](-80.87282721505684,-180,88.09879913729107,180);
  way["amenity"="compressed_air"](-80.87282721505684,-180,88.09879913729107,180);
  relation["amenity"="compressed_air"](-80.87282721505684,-180,88.09879913729107,180);
);
out center meta;
>;
out skel qt;
  1. Expensive Achavi style queries on large bbox
[adiff:"2017-08-06T22:39:23Z","2017-08-06T22:43:13Z"];(node(36.3181693,5.5767073,47.8357181,18.9969694)(changed);way(36.3181693,5.5767073,47.8357181,18.9969694)(changed););out meta geom(36.3181693,5.5767073,47.8357181,18.9969694);

Munin charts (perf test system on custom branch)

[Munin charts attached as screenshots]

Munin charts (main instance)

[Munin charts attached as screenshots]

Coord_Query_Statement::check_area_block - expensive ilat/ilon conversions

[timeout:3600][maxsize:3000000000];
area[boundary=administrative][admin_level]["de:amtlicher_gemeindeschluessel"~"^09"]->.a;
.a out count;
( way(area.a)["addr:housenumber"];
  node(area.a)["addr:housenumber"];
);
out count;
@ubuntu:~/osm-3s-patch-version/build/bin$ /usr/bin/time -v ./o3 --db-dir=db/ < ph
encoding remark: Please enter your query and terminate it with CTRL+D.
<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="Overpass API">
<note>The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.</note>
<meta osm_base=""/>

  <count>
    <tag k="nodes" v="0"/>
    <tag k="ways" v="0"/>
    <tag k="relations" v="0"/>
    <tag k="areas" v="2317"/>
    <tag k="total" v="2317"/>
  </count>
  <count>
    <tag k="nodes" v="467196"/>
    <tag k="ways" v="1017886"/>
    <tag k="relations" v="0"/>
    <tag k="areas" v="0"/>
    <tag k="total" v="1485082"/>
  </count>

</osm>
	Command being timed: "./o3 --db-dir=db/"
	User time (seconds): 204.09
	System time (seconds): 0.37
	Percent of CPU this job got: 100%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 3:24.46
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 784856
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 168840
	Voluntary context switches: 1
	Involuntary context switches: 259
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

@ubuntu:~/osm-3s-patch-version/build/bin$ /usr/bin/time -v ./osm3s_query --db-dir=db/ < ph
encoding remark: Please enter your query and terminate it with CTRL+D.
<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6" generator="Overpass API">
<note>The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.</note>
<meta osm_base=""/>

  <count>
    <tag k="nodes" v="0"/>
    <tag k="ways" v="0"/>
    <tag k="relations" v="0"/>
    <tag k="areas" v="2317"/>
    <tag k="total" v="2317"/>
  </count>
  <count>
    <tag k="nodes" v="467196"/>
    <tag k="ways" v="1017886"/>
    <tag k="relations" v="0"/>
    <tag k="areas" v="0"/>
    <tag k="total" v="1485082"/>
  </count>

</osm>
	Command being timed: "./osm3s_query --db-dir=db/"
	User time (seconds): 55.80
	System time (seconds): 0.42
	Percent of CPU this job got: 100%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:56.22
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 883220
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 194466
	Voluntary context switches: 1
	Involuntary context switches: 75
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0
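
The patch below follows a simple pattern: the per-coordinate ::ilat/::ilon conversions are hoisted out of the point-in-polygon inner loop by caching the converted pairs once per Area_Block, so the hot path only combines them with the block index. A minimal standalone sketch of that pattern (simplified names and conversions, not the actual Overpass API code):

#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stand-ins for ::ilat / ::ilon; the real conversions are more involved.
static uint32_t to_ilat(uint64_t coor) { return static_cast<uint32_t>(coor >> 32); }
static int32_t  to_ilon(uint64_t coor) { return static_cast<int32_t>(coor & 0xffffffffu); }

struct Block_Sketch
{
  std::vector<uint64_t> coors;                            // packed coordinates, as stored
  std::vector<std::pair<uint32_t, int32_t> > ilat_ilon;   // cached converted pairs
};

// Done once per block, outside the hot loop (mirrors the "Precalculate ilat/ilon pairs" hunks).
static void precompute(Block_Sketch& block)
{
  block.ilat_ilon.clear();
  block.ilat_ilon.reserve(block.coors.size());
  for (uint64_t c : block.coors)
    block.ilat_ilon.push_back(std::make_pair(to_ilat(c), to_ilon(c)));
}

// In the hot loop only a bitwise OR with the precomputed index part remains,
// as in check_area_block2 in the patch.
static void consume(const Block_Sketch& block, uint32_t idx_ilat, int32_t idx_ilon)
{
  for (const auto& p : block.ilat_ilon)
  {
    uint32_t lat = idx_ilat | p.first;
    int32_t  lon = idx_ilon | p.second;
    (void)lat; (void)lon;                                 // ... ray-casting checks go here ...
  }
}

int main()
{
  Block_Sketch block;
  block.coors.push_back(0x0000000100000002ULL);           // dummy coordinate
  precompute(block);
  consume(block, 0, 0);
  return 0;
}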
diff --git a/src/overpass_api/core/type_area.h b/src/overpass_api/core/type_area.h
index a57f29a..453e0a9 100644
--- a/src/overpass_api/core/type_area.h
+++ b/src/overpass_api/core/type_area.h
@@ -313,6 +313,8 @@ struct Area_Block
   Id_Type id;
   std::vector< uint64 > coors;
 
+  std::vector< std::pair< uint32, int32 > > ilat_ilon_pairs;
+
   Area_Block() : id(0u) {}
 
   Area_Block(void* data) : id(*(Id_Type*)data)
diff --git a/src/overpass_api/statements/area_query.cc b/src/overpass_api/statements/area_query.cc
index cd10022..90ca17f 100644
--- a/src/overpass_api/statements/area_query.cc
+++ b/src/overpass_api/statements/area_query.cc
@@ -35,6 +35,109 @@
 #include "recurse.h"
 
 
+
+namespace area_query {
+
+namespace detail {
+
+const static int HIT = 1;
+const static int TOGGLE_EAST = 2;
+const static int TOGGLE_WEST = 4;
+
+int check_area_block2
+(uint32 ll_index_ilat, int32 ll_index_ilon, const Area_Block& area_block,
+    uint32 coord_lat, int32 coord_lon)
+{
+  int state = 0;
+
+  auto it(area_block.ilat_ilon_pairs.begin());
+  uint32 lat = ll_index_ilat | it->first;                             // avoid ::ilat / ::ilon 
+  int32 lon = ll_index_ilon | it->second;                        // avoid ::ilat / ::ilon
+
+  while (++it != area_block.ilat_ilon_pairs.end())
+  {
+    uint32 last_lat = lat;
+    int32 last_lon = lon;
+
+    lat = ll_index_ilat | it->first;
+    lon = ll_index_ilon | it->second;
+
+    if (last_lon < lon)
+    {
+      if (lon < coord_lon)
+        continue; // case (1)
+      else if (last_lon > coord_lon)
+        continue; // case (1)
+      else if (lon == coord_lon)
+      {
+        if (lat < coord_lat)
+          state ^= TOGGLE_WEST; // case (4)
+        else if (lat == coord_lat)
+          return HIT; // case (2)
+        // else: case (1)
+        continue;
+      }
+      else if (last_lon == coord_lon)
+      {
+        if (last_lat < coord_lat)
+          state ^= TOGGLE_EAST; // case (4)
+        else if (last_lat == coord_lat)
+          return HIT; // case (2)
+        // else: case (1)
+        continue;
+      }
+    }
+    else if (last_lon > lon)
+    {
+      if (lon > coord_lon)
+        continue; // case (1)
+      else if (last_lon < coord_lon)
+        continue; // case (1)
+      else if (lon == coord_lon)
+      {
+        if (lat < coord_lat)
+          state ^= TOGGLE_EAST; // case (4)
+        else if (lat == coord_lat)
+          return HIT; // case (2)
+        // else: case (1)
+        continue;
+      }
+      else if (last_lon == coord_lon)
+      {
+        if (last_lat < coord_lat)
+          state ^= TOGGLE_WEST; // case (4)
+        else if (last_lat == coord_lat)
+          return HIT; // case (2)
+        // else: case (1)
+        continue;
+      }
+    }
+    else // last_lon == lon
+    {
+      if (lon == coord_lon &&
+          ((last_lat <= coord_lat && coord_lat <= lat) || (lat <= coord_lat && coord_lat <= last_lat)))
+        return HIT; // case (2)
+      continue; // else: case (1)
+    }
+
+    uint32 intersect_lat = lat +
+        ((int64)coord_lon - lon)*((int64)last_lat - lat)/((int64)last_lon - lon);
+    if (coord_lat > intersect_lat)
+      state ^= (TOGGLE_EAST | TOGGLE_WEST); // case (3)
+    else if (coord_lat == intersect_lat)
+      return HIT; // case (2)
+    // else: case (1)
+  }
+  return state;
+}
+
+
+}
+
+}
+
+
+
 class Area_Constraint : public Query_Constraint
 {
   public:
@@ -441,6 +544,10 @@ void Area_Query_Statement::collect_nodes
 
   uint32 loop_count = 0;
   uint32 current_idx(0);
+
+  uint32 current_idx_ilat(0);
+  int32 current_idx_ilon(0);
+
   while (!(area_it == area_blocks_db.discrete_end()))
   {
     current_idx = area_it.index().val();
@@ -459,6 +566,23 @@ void Area_Query_Statement::collect_nodes
       ++area_it;
     }
 
+    /* Test: Precalculate ilat/ilon pairs */
+    {
+      for (auto &it : areas)
+        for (auto& it2 : it.second)
+          for (auto coor : it2.coors)
+          {
+            uint32 _lat = ::ilat((coor>>32)&0xff, coor & 0xffffffff);
+            int32 _lon = ::ilon(((coor>>32)&0xff) ^ 0x40000000, coor & 0xffffffff); 
+            it2.ilat_ilon_pairs.push_back(std::make_pair(_lat, _lon));
+          }
+      current_idx_ilat = ::ilat(current_idx, 0);
+      current_idx_ilon = ::ilon(current_idx, 0);   // ^ 0x40000000 is only being applied once here
+    }
+
+    /* ------------------------------ */
+
+
     while (nodes_it != nodes.end() && nodes_it->first.val() < current_idx)
     {
       nodes_it->second.clear();
@@ -475,6 +599,7 @@ void Area_Query_Statement::collect_nodes
             + 91.0)*10000000+0.5);
         int32 ilon(::lon(nodes_it->first.val(), iit->ll_lower)*10000000
             + (::lon(nodes_it->first.val(), iit->ll_lower) > 0 ? 0.5 : -0.5));
+
         for (std::map< Area_Skeleton::Id_Type, std::vector< Area_Block > >::const_iterator it = areas.begin();
 	     it != areas.end(); ++it)
         {
@@ -484,7 +609,8 @@ void Area_Query_Statement::collect_nodes
           {
             ++loop_count;
 
-	    int check(Coord_Query_Statement::check_area_block(current_idx, *it2, ilat, ilon));
+	    int check(area_query::detail::check_area_block2(current_idx_ilat, current_idx_ilon, *it2, ilat, ilon));
+	    //int check(Coord_Query_Statement::check_area_block(current_idx, *it2, ilat, ilon));
 	    if (check == Coord_Query_Statement::HIT && add_border)
 	    {
 	      inside = 1;
@@ -494,10 +620,10 @@ void Area_Query_Statement::collect_nodes
 	      inside ^= check;
           }
           if (inside)
-	  {
-	    into.push_back(*iit);
-	    break;
-	  }
+          {
+            into.push_back(*iit);
+            break;
+          }
         }
       }
       nodes_it->second.swap(into);
@@ -679,6 +805,7 @@ void has_inner_points(const Area_Block& string_a, const Area_Block& string_b, in
     uint32 ilat = (coords_a[i].first + coords_a[i+1].first)/2;
     uint32 ilon = (coords_a[i].second + coords_a[i+1].second)/2 + 0x80000000u;
     int check = Coord_Query_Statement::check_area_block(0, string_b, ilat, ilon);
+
     if (check & Coord_Query_Statement::HIT)
       inside = check;
     else if (check)
@@ -708,6 +835,18 @@ void Area_Query_Statement::collect_ways
       add_way_to_area_blocks(way_geometries.get_geometry(*it2), it2->id.val(), way_segments);
   }
 
+  /* Test: Populate ilat/ilon pairs */
+  {
+    for (auto &it : way_segments)
+      for (auto& it2 : it.second)
+        for (auto coor : it2.coors)
+        {
+          uint32 _lat = ::ilat((coor>>32)&0xff, coor & 0xffffffff);
+          int32 _lon = ::ilon(((coor>>32)&0xff) ^ 0x40000000, coor & 0xffffffff);
+          it2.ilat_ilon_pairs.push_back(std::make_pair(_lat, _lon));
+        }
+  }
+
   std::map< uint32, std::vector< std::pair< uint32, Way::Id_Type > > > way_coords_to_id;
   for (typename std::map< Uint31_Index, std::vector< Way_Skeleton > >::iterator it = ways.begin(); it != ways.end(); ++it)
   {
@@ -724,6 +863,9 @@ void Area_Query_Statement::collect_ways
   // Fill node_status with the area related status of each node and segment
   uint32 loop_count = 0;
   uint32 current_idx(0);
+  uint32 current_idx_ilat(0);
+  int32 current_idx_ilon(0);
+
   while (!(area_it == area_blocks_db.discrete_end()))
   {
     current_idx = area_it.index().val();
@@ -742,6 +884,20 @@ void Area_Query_Statement::collect_ways
       ++area_it;
     }
 
+    /* Test: Populate ilat/ilon pairs */
+    {
+      for (auto &it : areas)
+        for (auto& it2 : it.second)
+          for (auto coor : it2.coors)
+          {
+            uint32 _lat = ::ilat((coor>>32)&0xff, coor & 0xffffffff);
+            int32 _lon = ::ilon(((coor>>32)&0xff) ^ 0x40000000, coor & 0xffffffff);
+            it2.ilat_ilon_pairs.push_back(std::make_pair(_lat, _lon));
+          }
+      current_idx_ilat = ::ilat(current_idx, 0);
+      current_idx_ilon = ::ilon(current_idx, 0);
+    }
+
     // check nodes
     while (nodes_it != way_coords_to_id.end() && nodes_it->first < current_idx)
       ++nodes_it;
@@ -763,7 +919,8 @@ void Area_Query_Statement::collect_ways
           {
             ++loop_count;
 
-	    int check(Coord_Query_Statement::check_area_block(current_idx, *it2, ilat, ilon));
+	    // int check(Coord_Query_Statement::check_area_block(current_idx, *it2, ilat, ilon));
+	    int check(area_query::detail::check_area_block2(current_idx_ilat, current_idx_ilon, *it2, ilat, ilon));
 	    if (check == Coord_Query_Statement::HIT)
             {
               inside = Coord_Query_Statement::HIT;
