
greyhound's People

Contributors

connormanning, gadomski, gbivins, hobu, verma

greyhound's Issues

API entry to list all pipelines

There seems to be no API call to list available pipelines.

How can I get a list of the pipelines served by Greyhound, and metadata about them? Is there any non-API solution?

An HTTP call would be convenient, especially if it returned the pipeline coverage.

put-pipeline hangs forever

Steps to reproduce:

vagrant up
vagrant ssh
vagrant@greyhound-dev:/vagrant/examples/data$ ../cpp/put-pipeline  read.xml

It hangs there forever: no error message, no information displayed. The same happens with the AZ stadium example.

Error: no such file or directory when using config.json file

Hello,

I receive the error below when trying to use a config.json file with the command "sudo docker run -it -p 8088:8080 connormanning/greyhound -c /path/to/config/file". What do you think could be the problem? Thanks!

LOG Using config at /path/to/config/file
fs.js:549
return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode);
^

Error: ENOENT: no such file or directory, open '/path/to/config/file'
at Error (native)
at Object.fs.openSync (fs.js:549:18)
at Object.fs.readFileSync (fs.js:397:15)
at Object.<anonymous> (/usr/lib/node_modules/greyhound-server/src/app.js:26:35)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
at Function.Module.runMain (module.js:441:10)
at startup (node.js:140:18)
at node.js:1043:3
Relaunching greyhound
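A likely cause (an assumption, not confirmed): the path passed to -c must exist inside the container, so a config file on the host has to be mounted in first. A sketch, where both paths below are hypothetical placeholders:

# both paths here are hypothetical placeholders
sudo docker run -it -p 8088:8080 \
  -v /host/config.json:/opt/greyhound/config.json \
  connormanning/greyhound -c /opt/greyhound/config.json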

fatal error C1083: Cannot open include file: 'json/json.h': No such file or directory [C:\Users\user\Documents\Git\fork\greyhound\build\session.vcxproj]

Hello,

Trying to get up and running with Greyhound and running into this. On the surface it looked like the much-dreaded node-gyp rebuild problem that everyone just *hates* on Windows... but this looks a little different, citing a missing file.

fatal error C1083: Cannot open include file: 'json/json.h': No such file or directory [C:\Users\user\Documents\Git\fork\greyhound\build\session.vcxproj]

Not exactly sure where that json folder is supposed to live.

The combination of Windows and node-gyp always seems touch and go, so I'm not sure what's driving this issue.

Gist of the full console output
https://gist.github.com/eric-schleicher/645bdd1526c8aa9bbb5f0091cfe2cc1b

and a gist of the npm-debug.log
https://gist.github.com/eric-schleicher/895c9e5df272b0fa6281f28cc28c1dc4

Let me know if it's something silly or simple. This is from a freshly cloned repo [30bba46]. Other projects on this system with node-gyp dependencies npm install just fine.

Data staging

It would be useful for Greyhound to support data staging operations. For example, a big query returns ~20 million points. If the server supported staging (user configurable), the results of that query could be posted there and the URL returned to the caller (&stage=true or something).
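A purely hypothetical sketch of that flow (neither the stage parameter nor this response shape exists today):

// Hypothetical request: stage the results instead of streaming them.
//   GET /resource/big/read?bounds=[...]&stage=true
// Hypothetical response: a URL to fetch the staged result set from later.
//   { "staged": true, "points": 20000000, "url": "http://server:8080/stage/abc123" }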

Alternative compression requests

We just have uncompressed and lazperf at the moment. It would be useful to allow alternatives like deflate to be requested by clients that can't speak the more efficient lazperf.
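A client-side sketch in Node.js, assuming a hypothetical compress=deflate option were added (the parameter value below is illustrative only):

const zlib = require('zlib');
// Hypothetical request: GET /resource/autzen/read?depth=10&compress=deflate
// Given `body`, the HTTP response payload as a Buffer:
function decompress(body) {
  return zlib.inflateSync(body); // plain zlib; no lazperf decoder needed client-side
}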

Endianness of binary data count incorrect

The client documentation refers to the 32-bit unsigned integer at the end of the point-cloud data as network byte order:

These may be parsed as a 32-bit unsigned integer, transmitted in network byte order.

This is defined as big-endian (see IBM's network byte order documentation: https://www.ibm.com/support/knowledgecenter/en/SSB27U_6.4.0/com.ibm.zvm.v640.kiml0/asonetw.htm).

However, it seems that the server is returning the data in little-endian format (when running on an Intel architecture).

E.g. for a response with 1456 entries, the last 4 bytes of the response are 0xb0050000 rather than the expected 0x000005b0.
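A minimal Node.js sketch of the difference, where body is the full read response as a Buffer:

// Matches the observed server behavior (1456 -> bytes b0 05 00 00):
function pointCount(body) {
  return body.readUInt32LE(body.length - 4);
}
// What the documented network byte order (big-endian) would imply instead:
//   body.readUInt32BE(body.length - 4)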

PdalBindings makes an unnecessary copy

The 2-argument version of node::Buffer::New makes a copy. We should use the 4-argument version with an explicit cleanup callback to avoid the (potentially very large) copy/reallocation.

Add /serverinfo call

It would be useful to have a /serverinfo call:

  • git SHA
  • version
  • PDAL version info
  • active memory consumption
  • list of resources
  • is append support enabled
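A hypothetical response sketch for such a call; every field name below is illustrative, not an existing API (the values echo details seen elsewhere on this page):

{
  "gitSha": "30bba46",        // running build's commit
  "version": "1.1.1",         // greyhound-server version
  "pdal": "1.3.0",            // PDAL version info
  "memory": "1.2 GB",         // active memory consumption
  "resources": ["autzen", "red-rocks"],
  "appendEnabled": false
}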

Return schema with Read results

An invalid PDAL Dimension name will be silently omitted from the binary results of a Read command. Currently, clients should base their requested schemas on the results of the Schema command; however, it would be useful to let a client use a static schema containing all the Dimensions it recognizes/cares about.

E.g. a rendering client could always send XYZRGBI, and a missing 'Intensity' Dimension would be silently omitted in the binary results.
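A sketch of the proposed client usage, assuming a read schema parameter that accepts a JSON dimension list (the always-send-the-same-schema pattern is the proposal, not current behavior):

const base = 'http://localhost:8080/resource/autzen'; // placeholder server/resource
// A rendering client's static "XYZRGBI" schema (dimension list illustrative):
const schema = [
  { name: 'X', size: 4, type: 'floating' },
  { name: 'Y', size: 4, type: 'floating' },
  { name: 'Z', size: 4, type: 'floating' },
  { name: 'Red', size: 2, type: 'unsigned' },
  { name: 'Green', size: 2, type: 'unsigned' },
  { name: 'Blue', size: 2, type: 'unsigned' },
  { name: 'Intensity', size: 2, type: 'unsigned' },
];
// Under the proposal, a resource lacking Intensity would just omit that
// dimension from the binary results instead of requiring a prior Schema call.
const url = base + '/read?depth=12&schema=' + encodeURIComponent(JSON.stringify(schema));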

Serialization

We shouldn't need to store every active pipeline's PointBuffer and KD/Quad index in RAM. Allow serialization to disk.

Reservation failure

Hello @connormanning. Could you please guide me through what this issue could be? Thanks! I get the following error at some point while serving a 300M-point cloud.

Exception in pool task: std::bad_alloc
16:07:02:11 LOG Error handling: { code: 400, message: 'Reservation failure' }

Adaptive coordinate quantization

@LAStools pointed out that we don't need to give full coordinate precision to the client if the client doesn't need it. The benefit is that compression can be much higher for data quantized to 0.1 rather than 0.01 coordinate precision.

The idea is that the client would ask for a quantization level as part of the dimension request, and when the chunks are compressed, they are quantized to that level.
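A minimal sketch of the arithmetic (illustrative only, not an existing API):

// Snap a coordinate to a client-requested precision before compression.
const quantize = (value, precision) => Math.round(value / precision) * precision;
quantize(4552379.08325, 0.01); // -> 4552379.08: more distinct values, weaker compression
quantize(4552379.08325, 0.1);  // -> 4552379.1:  fewer distinct values, stronger compression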

Authentication Information

Hello @connormanning, could you please give more details about how to use the authentication functionality of Greyhound? I am trying to add this feature on an EC2 instance. Thanks!

Document configuration

We need to document the configuration options - i.e. config.js and the frontend-proxy configs.

Greyhound stops responding

After upgrading Greyhound to be able to use nativeBounds in queries, we are now seeing issues where Greyhound will work for a period of time and then just stop responding. The docker container is still there and running, but Greyhound isn't doing anything. Checking the container's logs after it has stopped shows no error message.

If I kill the container and restart it, all is fine again for a period of time and then it stops. This goes for PoTree as well as my own application.

To test whether it is alive I have tried issuing an info command on a resource (I assume these are quite lightweight? unless there is a server-status-type request I could make?); the TCP/IP connection is established but I get nothing back.

Alas, I don't have a timescale for when this happens; Greyhound is only called on intermittently, and so far we have not needed to monitor it for responsiveness.

Document READ decision tree

The decision tree for determining the type of read needs to be documented. Currently a user needs to look at the actual code in session-handler/app.js to figure it out.

Cannot enlarge memory arrays.

Hello @connormanning. With new data, I am getting the following error in the browser, coming from greyhoundbinarydecoderworker.js. Any ideas what it could be or how to solve it? Thanks!

"Cannot enlarge memory arrays. Either (1) compile with -s TOTAL_MEMORY=X with X higher than the current value 117440512, (2) compile with ALLOW_MEMORY_GROWTH which adjusts the size at runtime but prevents some optimizations, or (3) set Module.TOTAL_MEMORY before the program runs."

Ram never released with docker

Hi @connormanning

I have a problem using greyhound docker.

Tested with versions:

  • latest
  • 1.1.1
  • 1.1
  • 1.0

I'm running the container with:

docker run -it -p 8080:8080 -v /home/me/greyhound.conf:$HOME/greyhound.conf -v /home/me/entwine:/entwine connormanning/greyhound:1.1.1 -c $HOME/greyhound.conf

My configuration file is:

{
  "cacheSize": "8 GB",
  "paths": ["/entwine"],
  "resourceTimeoutMinutes": 30,
  "http": {
    "port": 8080,
    "headers": {
      "Cache-Control": "public, max-age=300",
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET,PUT,POST,DELETE"
    }
  }
}

When I request data from my browser, the RAM is never released.

At a fresh start:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16091         193        9515         112        6382       15540
Swap:           510         164         346

$ docker stats <container_id>

CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O     PIDS
41fbdc424b94   gifted_goldberg   0.00%   8.234MiB / 15.71GiB   0.05%    618B / 0B         0B / 0B

After I just move around in the 3D view in Potree, I have this:

$ free -m
              total        used        free      shared  buff/cache   available
Mem:          16091        1500        8199         112        6390       14232
Swap:           510         163         347

$ docker stats <container_id>

CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O     PIDS
41fbdc424b94   gifted_goldberg   0.00%   1.284GiB / 15.71GiB   8.17%    1.87MB / 63.4MB   7.39MB / 0B   9

When I continue to move around in Potree, the RAM just increases.

CONTAINER ID   NAME              CPU %   MEM USAGE / LIMIT     MEM %    NET I/O           BLOCK I/O     PIDS
41fbdc424b94   gifted_goldberg   9.18%   4.042GiB / 15.71GiB   25.72%   4.91MB / 165MB    89.7MB / 0B   9

I have tried changing cacheSize to 512 MB; it doesn't change anything.

Have you got a solution to this?

Thanks in advance.

EDIT:

Here is my docker info output.

Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 2
Server Version: 17.12.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.78-xxxx-std-ipv6-64
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.71GiB
Name: ns343385
ID: FPV3:HL67:AJDI:LMKZ:DIJK:7XZU:K257:OBXB:3DKR:RYRO:E6IM:UPLX
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

API to get the meta-data of Greyhound server

Greyhound is pretty awesome! I'm attempting to understand how I can go about writing a new client for it, but I ran into a snag. If I know the location of a Greyhound server, I would like to be able to query things about the server without having to query the actual data it's serving. For instance, how do I know what point clouds are being hosted there? Is there an API to query things like:

  • Is greyhound running/active? greyhound:8080/status
  • list the point clouds greyhound is aware of, so resources can be queried at runtime? greyhound:8080/point-clouds

Is there an existing API like this? I looked at the js file and didn't see anything obvious, but I'm not all that familiar with JS app dev.

Greyhound service not responding

I have installed a Docker container for a Greyhound service on a server. From time to time the service goes down without any relevant details in the docker logs. The info request never responds (/resource/270_merge_3857/info) and the Greyhound docker container needs to be restarted to get back on track. Where should I look for more info on this problem? Is it a Docker, Greyhound, or Entwine index config problem?

Problem decoding Greyhound Data upon Read

I'm trying to work with the data returned by the Greyhound read query. It looks like I'm losing precision somewhere along the line. Having written my own code, I had assumed my struggles were a bug of mine. However, I just discovered your own sample Python code ( https://github.com/hobu/greyhound/pull/35/files ) that appears to have the same issue. Playing with the settings in this file, I run it to produce this query:

http://[myipaddress]:8080/resource/[MyDemoResource]/read?bounds=[-13634057.2874,4552379.08325,-13634052.2874,4552384.08325]&compress=false

It returns 30813 points from a 5-meter cube of my data. The LAS file that's generated looks like this:

[image: screenshot of the generated LAS file]

You can probably see 30-40 points within the bounds of this data in the image above. When I analyze the data itself, there are really only 6 unique values in each of the X and Y coordinates. In other words, each point visible in this image is actually about 880 superimposed vertices.

The part that I don't understand is that when I aim Potree at this same area, I see the full point density and it looks nothing like this. It's filled in, and almost solid in appearance until you really zoom in on it.

For what it's worth, this is the point breakdown per depth level (I only have data in 11-21):

11: 1, 12: 2, 13: 15, 14: 61, 15: 255, 16: 966, 17: 3534, 18: 10399, 19: 13031, 20: 2514, 21: 35

My schema:

{
  "baseDepth": 7,
  "bounds": [-13635560, 4545640, -6040, -13623480, 4557720, 6040],
  "boundsConforming": [-13635553.321355114, 4546474.442458361, -346.13140000000004, -13623524.425497472, 4556869.877397481, 309.4769],
  "numPoints": 1185748768,
  "offset": [-13629520, 4551680, 0],
  "reprojection": {"in": "", "out": "EPSG:3857"},
  "scale": 0.001,
  "schema": [
    {"name": "X", "size": 4, "type": "signed"},
    {"name": "Y", "size": 4, "type": "signed"},
    {"name": "Z", "size": 4, "type": "signed"},
    {"name": "Intensity", "size": 2, "type": "unsigned"},
    {"name": "ReturnNumber", "size": 1, "type": "unsigned"},

I feel like I'm missing something fundamental in trying to interpret the returned data. Do you have any ideas what I might be missing?

Thank you.

PDAL filtering

It would be really useful to be able to post a PDAL filter blob and have it executed on the data before the results are returned to the HTTP client.

HTTP error *** could not be created.

I am attempting to query data from Greyhound.

I have data produced by Entwine located at : /opt/data/greyhound/

example : /opt/data/greyhound/autzen/

I have a configuration file for greyhound located at : /opt/data/greyhound/greyhound-config.json

I start the Greyhound container with the following commands, and I get the same error when I go to this URL: http://localhost:8080/resource/autzen/info

sudo docker run -it -v ~:/opt/data/greyhound -p 8080:8080 --network=webviewernetwork connormanning/greyhound

And

sudo docker run -it -v ~:/opt/data -p 8080:8080 --network=webviewernetwork connormanning/greyhound

When I try to access info I get an error in the terminal.

Creating autzen
Trying /greyhound: fail - Could not read file /greyhound/autzen/entwine
Trying ~/greyhound: fail - Could not read file /root/greyhound/autzen/entwine
Trying /entwine: fail - Could not read file /entwine/autzen/entwine
Trying ~/entwine: fail - Could not read file /root/entwine/autzen/entwine
Trying /opt/data: fail - Could not read file /opt/data/autzen/entwine
HTTP error: autzen could not be created

I don't understand why the word "entwine" is added to the path.

When I append -c /opt/data/greyhound/greyhound-config.json to my commands, it doesn't change anything.

When I renamed the folder greyhound to entwine, I got the same error.

I have also tried this command, but I got the same error.

docker run -it --network=webviewernetwork -p 8080:8080 -v `pwd`:/opt/data/greyhound connormanning/greyhound "bash -c "cp /opt/data/greyhound/greyhound-config.json /var/greyhound/config.json && greyhound dockerstart && greyhound log""

So I don't understand what's going on.

I suppose something is going wrong, but I can't figure it out.

Filter argument sensitive to quote type

I am passing a filter in the Greyhound query to select just a subset of point classifications. If the URL is sent as

&filter={'Classification': {'$in': [1, 2]}}

I get a 400 error

Error during parsing: * Line 1, Column 21
Missing '}' or object member name

But if I change the quotes to double quotes, it is happy:

filter={"Classification": {"$in": [1, 2]}}

Read query returning zero results dependent on formatting of query data

I am attempting to query data from Greyhound using scaling and offsets, and I am getting some strange results that do not make any sense.

To test URLs I am also using Postman to send the URLs generated by my client, to check that they are valid and that the server responds.

What I have seen is a very strange situation: if there is an equals sign between "bounds" and its value and a scale option is present in the URL, then Greyhound returns 0 points. However, if I remove the equals sign, data is returned.

So:
This returns data:
http://:8080/resource/read?depth=7&bounds[339970,154970,-2460,345030,160030,2600]&scale=0.1

This returns a point count of 0:
http://:8080/resource/read?depth=7&bounds=[339970,154970,-2460,345030,160030,2600]&scale=0.1

Removing the scale returns data:
http://:8080/resource/read?depth=7&bounds=[339970,154970,-2460,345030,160030,2600]

Comparing this with your examples, I see an equals sign is present. Is there some strange bug here, or am I doing something wrong?

No LICENSE specified

Greyhound is released under an Apache 2.0 license. It should be stated as such in the repository.

Use gh-pages for greyhound.io

GitHub Pages should be plenty powerful enough for Greyhound's docs. Let's set up the DNS so greyhound.io points to gh-pages.

Also, move the DNS to be hosted by Route 53 instead of Gandi.

'Hole' determination for raster queries

Currently there's a byte prepended to each point in a raster that acts as a boolean for whether the point exists or is a hole. That's a lot of overhead. It should probably be a bitmap in the initial non-binary response, and it also needs to be described in the client doc.
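A hypothetical sketch of the proposed bitmap (one bit per raster cell instead of one byte per point, an 8x reduction in hole overhead; nothing below is an existing API):

// `bitmap` is a Uint8Array from the initial response; cell `index` exists
// iff its bit is set.
function cellExists(bitmap, index) {
  return (bitmap[index >> 3] & (1 << (index & 7))) !== 0;
}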

Not building: no matching function for call to ‘entwine::Reader::query(...

Running into a compilation error on:

  • Ubuntu 16.04 64bit
  • PDAL 1.3.0 with lazperf
  • entwine built from master without errors, in /usr/local/include/entwine/
  • latest greyhound pull from master

> greyhound-server@… install /usr/local/lib/node_modules/greyhound-server
> node-gyp rebuild

make: Entering directory '/usr/local/lib/node_modules/greyhound-server/build'
  CXX(target) Release/obj.target/session/src/session/bindings.o
  CXX(target) Release/obj.target/session/src/session/session.o
../src/session/session.cpp: In member function ‘std::shared_ptr<ReadQuery> Session::query(const entwine::Schema&, bool, double, const entwine::Point&, const entwine::Bounds*, std::size_t, std::size_t)’:
../src/session/session.cpp:203:31: error: no matching function for call to ‘entwine::Reader::query(const entwine::Schema&, const entwine::Bounds&, const size_t&, const size_t&, const double&, const entwine::Point&)’
                         offset)));

Poorly formed read request with bounds parameter should return status code of 400

The user should get an HTTP status code of 400 when incorrectly entering the bounds.

For example,

http://data.greyhound.io/resource/red-rocks/read?depth=10&bounds[481960,4390170,481970,4390180]

where the = is missing after bounds, currently returns a status code of 500 with the rather cryptic error message "Unexpected token o".

Session sharing counts for idle sessions

Clients that are idle and connected for a long time (without destroying their session) are still included in the session sharing counts but aren't contributing to session usage. If a connection remains open and is idle for someNewConfigParameter amount of time, it should not contribute to the session sharing counts.
