vroom-project / vroom-docker
Docker image for vroom and vroom-express
License: BSD 2-Clause "Simplified" License
ERROR: for ors Cannot start service ors: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused "rootfs_linux.go:58: mounting \"/home/synergy/Documents/vroom-docker/data/heidelberg.osm.gz\" to rootfs \"/var/lib/docker/overlay2/16d6dd375f56cfbd9f7b8295edadf6a004dccf8e459019cd051038199712768f/merged\" at \"/var/lib/docker/overlay2/16d6dd375f56cfbd9f7b8295edadf6a004dccf8e459019cd051038199712768f/merged/ors-core/data/osm_file.pbf\" caused \"not a directory\""": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
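The "not a directory" message above typically means the host path in the file bind-mount did not exist when Docker first brought the stack up, so Docker silently created a directory at that path, which then clashes with the file mount. A quick host-side check before `docker-compose up` (a sketch; the helper name is made up):

```shell
# check_mount_source: verify a host path meant for a file bind-mount is a
# regular file. If it is absent, Docker creates a *directory* there, which
# later produces the "not a directory" error above.
check_mount_source() {
  if [ -f "$1" ]; then
    echo "ok: regular file"
  elif [ -d "$1" ]; then
    echo "error: is a directory (remove it and restore the file)"
  else
    echo "error: missing (download the OSM file first)"
  fi
}

check_mount_source "./data/heidelberg.osm.gz"
```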
The readme states that:

"The tagging scheme follows the release convention of vroom and adds patch releases for vroom-express patch releases."

This may be a bit confusing, and also problematic. Suppose we tag a v1.9.1 here for the sake of a new vroom-express release, but then have to patch v1.9.0 to v1.9.1 in the core repo? This never actually happened, but we should avoid the ambiguity.

What about simply using the vroom version, as that's the most important part (not much changes on the vroom-express side)? Then implicitly the vroom-express version would be the latest available when pushing the image. That would work smoothly because we usually sync releases.
Hello, I have a problem deploying a version higher than VROOM_RELEASE=v1.9.0 and VROOM_EXPRESS_RELEASE=v0.7.0:
```
Error: Don't know how to handle 'options.size' type: undefined
    at Object.size (/vroom-express/node_modules/rotating-file-stream/index.js:557:19)
    at checkOpts (/vroom-express/node_modules/rotating-file-stream/index.js:660:20)
    at Object.createStream (/vroom-express/node_modules/rotating-file-stream/index.js:702:18)
    at Object.<anonymous> (/vroom-express/src/index.js:32:29)
    at Module._compile (internal/modules/cjs/loader.js:999:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)
    at Module.load (internal/modules/cjs/loader.js:863:32)
    at Function.Module._load (internal/modules/cjs/loader.js:708:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12)
    at internal/main/run_main_module.js:17:47
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `node src/index.js`
npm ERR! Exit status 1
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2022-11-08T11_05_47_045Z-debug.log
```
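The stack trace shows rotating-file-stream failing on an undefined `options.size`, which suggests the mounted config.yml predates vroom-express v0.8.0 and is missing the log-rotation setting it now reads. A hedged guess at the fix, based on the config.yml shown further down this page: make sure the mounted config defines `logsize` (the value below is an example):

```yaml
cliArgs:
  logdir: '/conf'  # where rotated logs go
  logsize: '100M'  # max log file size for rotation; if this key is absent,
                   # rotating-file-stream receives size: undefined and crashes
```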
Dockerfile:

```dockerfile
FROM debian:buster-slim as builder
LABEL maintainer=[email protected]
WORKDIR /
RUN echo "Updating apt-get and installing dependencies..." && \
    apt-get -y update > /dev/null && apt-get -y install > /dev/null \
    git-core \
    build-essential \
    g++ \
    libssl-dev \
    libasio-dev \
    pkg-config

ARG VROOM_RELEASE=v1.10.0
RUN echo "Cloning and installing vroom release ${VROOM_RELEASE}..." && \
    git clone https://github.com/VROOM-Project/vroom.git && \
    cd vroom && \
    git fetch --tags && \
    git checkout -q $VROOM_RELEASE && \
    make -C /vroom/src && \
    cd /

ARG VROOM_EXPRESS_RELEASE=v0.8.0
RUN echo "Cloning and installing vroom-express release ${VROOM_EXPRESS_RELEASE}..." && \
    git clone https://github.com/VROOM-Project/vroom-express.git && \
    cd vroom-express && \
    git fetch --tags && \
    git checkout $VROOM_EXPRESS_RELEASE

FROM node:12-buster-slim as runstage
COPY --from=builder /vroom-express/. /vroom-express
COPY --from=builder /vroom/bin/vroom /usr/local/bin
WORKDIR /vroom-express
RUN apt-get update > /dev/null && \
    apt-get install -y --no-install-recommends \
    libssl1.1 \
    curl \
    > /dev/null && \
    rm -rf /var/lib/apt/lists/* && \
    # Install vroom-express
    npm config set loglevel error && \
    npm install && \
    # To share the config.yml & access.log file with the host
    mkdir /conf

COPY ./config.yml /vroom-express/config.yml
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

ENV VROOM_DOCKER=osrm \
    VROOM_LOG=/conf

HEALTHCHECK --start-period=10s CMD curl --fail -s http://localhost:3000/health || exit 1

EXPOSE 3000

ENTRYPOINT ["/bin/bash"]
CMD ["/docker-entrypoint.sh"]
```
Works fine with: VROOM_RELEASE=v1.9.0 and VROOM_EXPRESS_RELEASE=v0.7.0
Gets errors with: VROOM_RELEASE=v1.10.0 and VROOM_EXPRESS_RELEASE=v0.8.0
Good morning,
I'm terribly sorry to bug you all. I know you've answered similar questions in the past, which I've read (closed issues), and I've searched the web for an answer. But I'm honestly still stumped. I'm a noob at this, clearly. I tried two different docker-compose files and updated config.yml. I am not able to connect to VROOM using the first docker-compose, and not able to get VROOM to communicate with OSRM with the second (more details below). Here are the two docker-compose files:
The one from this repository:
```yaml
version: "2.4"
services:
  vroom:
    network_mode: host
    image: vroomvrp/vroom-docker:v1.10.0
    container_name: vroom
    volumes:
      - E:/Docker/vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=osrm # router to use, osrm, valhalla or ors
    depends_on:
      - osrm
  osrm:
    image: osrm/osrm-backend
    container_name: osrm
    restart: always
    ports:
      - 5000:5000
    volumes:
      - E:/Docker:/data
    command: "osrm-routed --max-matching-size 1000 --max-table-size 1000 --max-viaroute-size 1000 --algorithm mld /data/us-latest.osrm"
```
And this other one:
```yaml
version: "3"
services:
  osrm:
    container_name: osrm
    image: osrm/osrm-backend:v5.24.0
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - E:/Docker:/data
    command: "osrm-routed --max-matching-size 1000 --max-table-size 1000 --max-viaroute-size 1000 --algorithm mld /data/us-latest.osrm"
    networks:
      tsp_network:
        aliases:
          - osrm
  vroom:
    container_name: vroom
    image: vroomvrp/vroom-docker:v1.10.0
    restart: always
    ports:
      - "3000:3000"
    volumes:
      - E:/Docker/vroom-conf/:/conf
    depends_on:
      - osrm
    networks:
      tsp_network:
        aliases:
          - vroom
networks:
  tsp_network:
    driver: bridge
```
Additionally, I modified my config.yml, as mentioned in previous posts, to the following:
```yaml
routingServers:
  osrm:
    car:
      host: 'osrm'
      port: '5000'
    bike:
      host: 'osrm'
      port: '5000'
    foot:
      host: 'osrm'
      port: '5000'
```
With the first docker-compose file, I'm having trouble connecting to VROOM altogether. I've tried sending requests to localhost, my local IP addresses, 0.0.0.0, and 127.0.0.1. Each time, I get a "Couldn't connect to server" message. When I look up which ports are open on my computer, I don't see 3000 (see screenshots below).
The second docker-compose file allows me to connect to VROOM at localhost, but I can't seem to make it communicate with OSRM. I keep getting the following message, no matter what I try:
"{"code":3,"error":"Failed to connect to osrm:5000"}"
I truly appreciate the help and everything you're doing. This is an incredible project and I am really excited to start playing around with it. Have a great day!!
Just referencing the PR here as well, also for #8: GIScience/openrouteservice#717. Will fix the docker-compose.yml here once it's merged.
So Dockerhub chose to take the Travis CI route to give up on hosting images for free and push eligible orgs to their “OSS program”. Having the Travis experience, I’d rather go for GitHub right away.
Since we don’t have a “latest” image, there’s no action to take. Old images will continue to be hosted on Dockerhub, but future images will be published on GitHub packages, it’s pretty seamless, I already did that for most of our images.
I am not trying to build a new image but just run the one you guys created.
I am running Docker Desktop for Windows.
I ran this command:
docker run -dt --name vroom --net host -v /vroom-vol -e VROOM_ROUTER=osrm vroomvrp/vroom-docker:v1.10.0
Got this feedback from the command:
Unable to find image 'vroomvrp/vroom-docker:v1.10.0' locally
v1.10.0: Pulling from vroomvrp/vroom-docker
f7ec5a41d630: Pull complete
af85e22911d9: Pull complete
07060573ed70: Pull complete
a082ae6404c8: Pull complete
0aa4da5b6a9b: Pull complete
e6800109a6e6: Pull complete
17b951da241f: Pull complete
126790e7b93b: Pull complete
3a78032756fb: Pull complete
Digest: sha256:c2971c02a5c2f2e4b1c8507bde40db4a0f305c1a2fefaec8a0db02e76e01a53b
Status: Downloaded newer image for vroomvrp/vroom-docker:v1.10.0
016aeb22cfbc20a42f1c0ba828f62228a3f02ea7e5f2e8aef8c5f18e87db514d
Then I tried to post to the image and got this in the log:
{"code":3,"error":"Failed to connect to 0.0.0.0:5000"}
I have been beating my head against the wall on this problem for the last 10 hours. Please, someone help. This is the first time I have ever used Docker.
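"Failed to connect to 0.0.0.0:5000" means vroom-express started fine but is pointing at the default OSRM address from config.yml, where nothing is listening. Running the vroom image alone is not enough: you also need an OSRM backend, and the mounted /conf/config.yml must point at it. A hedged sketch (hostname below is an example, not the required value):

```yaml
routingServers:
  osrm:
    car:
      host: 'host.docker.internal'  # wherever your OSRM server actually runs
      port: '5000'
```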
@jcoupey has tagged a release candidate version v1.8.0-rc.1
We are actually using the image available on dockerhub within our test and deployment process.
As I want to test this version, which contains some interesting fixes, I do not want to change the build process at this specific moment between an RC and a proper release.
@nilsnolde what do you think about it ?
ref #55
Did some stupid stuff in docker-entrypoint.sh.
Will have to implement something that prohibits a Docker user from changing the location of the logdir config variable (alternatively, we could employ another env var, but I think it should be fair enough this way).
Some adjustments are required in order to build an image against the next v1.7.0 release of VROOM, see https://github.com/VROOM-Project/vroom/blob/release/1.7/CHANGELOG.md for details.
It basically comes down to dropping all boost dependencies and adding asio as a standalone dependency (libasio-dev package).
Hopefully this will result in a decreased size for the image, would be interesting to measure the difference.
Hi,
I'm using vroom-docker and the CLI flags c, g, x, t are working fine for me, meaning they are actually processed correctly and I get coherent results.
However, when I pass the new arg -l for the timeout, it seems it's not taken into account: the responses seem to ignore this parameter, and the runtime is exactly the one I obtain when the flag is not passed at all.
My question: is the timeout flag -l expected to work in the docker architecture as well?
What is the endpoint of the vroom docker image for multi-point route optimization?
Is that normal?
Creating network "vroom-docker_default" with the default driver
Creating osrm ... done
Creating vroom ... done
Attaching to osrm, vroom
osrm | [warn] Missing/Broken File: /data/map.osrm.ramIndex
osrm | [warn] Missing/Broken File: /data/map.osrm.fileIndex
osrm | [warn] Missing/Broken File: /data/map.osrm.edges
osrm | [warn] Missing/Broken File: /data/map.osrm.geometry
osrm | [warn] Missing/Broken File: /data/map.osrm.turn_weight_penalties
osrm | [warn] Missing/Broken File: /data/map.osrm.turn_duration_penalties
osrm | [warn] Missing/Broken File: /data/map.osrm.datasource_names
osrm | [warn] Missing/Broken File: /data/map.osrm.names
osrm | [warn] Missing/Broken File: /data/map.osrm.timestamp
osrm | [warn] Missing/Broken File: /data/map.osrm.properties
osrm | [warn] Missing/Broken File: /data/map.osrm.icd
osrm | [warn] Missing/Broken File: /data/map.osrm.maneuver_overrides
osrm | [error] Required files are missing, cannot continue
osrm exited with code 1
vroom |
vroom | > [email protected] start /vroom-express
vroom | > node src/index.js
vroom | vroom-express listening on port 3000!
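The Missing/Broken File warnings mean the /data volume mounted into the OSRM container holds no preprocessed graph: osrm-routed expects the map.osrm* files produced by osrm-extract, osrm-partition and osrm-customize. A small host-side sanity check (a sketch; the helper name and the ./osrm path are made up):

```shell
# check_osrm_graph: report whether a directory contains the preprocessed
# *.osrm* files that osrm-routed needs. The usual pipeline that produces
# them (run once before docker-compose up) is, for illustration:
#   docker run -t -v "$PWD/osrm:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/map.osm.pbf
#   docker run -t -v "$PWD/osrm:/data" osrm/osrm-backend osrm-partition /data/map.osrm
#   docker run -t -v "$PWD/osrm:/data" osrm/osrm-backend osrm-customize /data/map.osrm
check_osrm_graph() {
  if ls "$1"/*.osrm* >/dev/null 2>&1; then
    echo "graph found"
  else
    echo "graph missing - run the preprocessing pipeline first"
  fi
}

check_osrm_graph "./osrm"
```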
Hey guys, can we update this to the latest VROOM version? Thank you so much!
We’re testing already, which is great. But we only test the self-contained example instances. It’d be worthwhile to test the integration with all routers with Andorra or so, so we get notified ahead of time when something breaks. Best in a scheduled GitHub Action, e.g. every 2 weeks, as there’s hardly any activity here unless we’re releasing.
Hi, I have a fresh EC2 t2.2xlarge instance running and have been trying to use Docker to get the full stack running with docker-compose.
I'll be using OSRM so I have started with the following steps:
These files all process with no errors.
I then docker-compose up with my docker-compose.yml
```yaml
version: "2.4"
services:
  vroom:
    network_mode: host
    image: vroomvrp/vroom-docker:v1.8.0
    container_name: vroom
    volumes:
      - ./vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=osrm # router to use, osrm or ors
    depends_on:
      - osrm
  osrm:
    image: osrm/osrm-backend
    container_name: osrm
    restart: always
    ports:
      - 5000:5000
    volumes:
      - ./osrm:/data
    command: "osrm-routed --max-matching-size 1000 --max-table-size 1000 --max-viaroute-size 1000 --algorithm mld /data/australia-latest.osrm"
```
I am getting a Missing/Broken File: /data/australia-latest.datasource_names warning and [error] Required files are missing, cannot continue.
I think this means that I have not correctly mounted the files inside my docker container, but as a docker newbie I don't know where I'm going wrong.
Any help would be appreciated.
I deployed the VROOM docker container on my own server (8 vCPU, 16 GB RAM, provider DO). Everything works just perfectly, great software.
Then I tried to test it on 1900 waypoints with 70 vehicles. It works, but I get the calculated optimized routes in 15 minutes.
I ran the docker stats command and saw a couple of interesting numbers:
and then
If I understand correctly, in the first pic OSRM calculates the matrix, and in the second vroom gets the matrix and starts to optimize and build routes. Am I right?
here are my questions:
I've successfully deployed my ORS container:
{
"status": "ready"
}
I have also successfully deployed my VROOM container, but I get this error when submitting the request:
{
"code": 3,
"error": "Failed to connect to 0.0.0.0:8080"
}
Note that they have different domain names, but they are running on the same server (Google Cloud Run), which I tested with a simple ping -a https://my-external-host-for-ORS.app and ping -a https://my-external-host-for-VROOM.app, and the result is the same IP as expected.
One thing that is not clear to me is the usage of network_mode: host. Is host just a flag, or should it be the host address, i.e. network_mode: https://my-external-host-for-ORS.app [or IP]? And since they are running on the same server, I believe 0.0.0.0:8080 should work, right?
Now I'm wondering which setting I'm missing in order to make VROOM and ORS communicate with each other; I'm pretty sure it's something related to the networking settings.
Any help would be appreciated.
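To the network_mode question above: host is a literal Docker networking mode (share the host's network stack), not a placeholder for a hostname, so network_mode: https://... is not valid. And 0.0.0.0 is a listen-on-all-interfaces address, not a destination you can connect to. When ORS and VROOM run as separate managed services (as on Cloud Run), one approach worth trying is pointing the mounted config.yml at the actual ORS hostname (the values below are illustrative, not confirmed):

```yaml
routingServers:
  ors:
    driving-car:
      host: 'my-external-host-for-ORS.app'  # the real, reachable ORS hostname
      port: '443'                           # example; whichever port ORS serves on
```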
ORS was easy for me. I'd appreciate it if someone with OSRM experience (I never even set it up myself) could provide the example for OSRM.
New issue: the osrm-data image name is incorrect.
Pulling osrm (osrm-data/osrm-backend:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
The image name should be osrm/osrm-backend
Looking briefly at the runtime requirements for libosrm from vroom's makefile I guess this wouldn't bloat an image too much?
https://github.com/VROOM-Project/vroom/blob/24a6dd54175f40f2fb58be1c2ff76dd951a957f8/src/makefile#L30
I also do think it would be quite handy to have the lib rather than the full-fledged HTTP docker container for OSRM, should be more lightweight in total. Of course we'd need to incorporate the lib into the vroom image.
My suggestion would be: two images for each vroom release, one with libosrm, one without. On Dockerhub (I think) it's quite tedious to set up build arguments and such. Anyways, dockerhub (as so many other "free" services) restricts anonymous pulls quite a lot these days. Maybe it's also time to move to github actions? (until they also restrict of course :D, but pretty sure they'll wait until they got a really good market share for CI & packaging, the classic drug dealer move..)
What do you think @jcoupey? I'd target that for the next vroom release. Any idea when that'll be approx?
Hi! Can we release v1.12.0 to Dockerhub? I guess a tag needs to be created to kick off the pipeline.
It would be nice to have #53 landed before release
Hi all,
I need to have:
OpenRouteService -> Vroom -> Vroom Express
running on my self hosted AWS Server so I can call the Rest API for commands specified here:
https://github.com/VROOM-Project/vroom/blob/master/docs/API.md
I have good programming experience in C#, but I have never worked with servers at all. I have been struggling to get anything started for days now, and the description says something about 3 minutes, which seems embarrassing.
I even tried to find help over freelancer.com but all applicants turned out to be scammers who tried to get my AWS main credentials.
I watched this Tutorial about deploying docker containers in AWS EC2 (https://www.youtube.com/watch?v=lO2wU2rcGUw&ab_channel=CloudSkills) but when typing:
docker pull vroomvrp/vroom-docker
I already get the error: manifest for vroomvrp/vroom-docker:latest not found.
I would appreciate any help, hint or consultation offer!
The entrypoint script is not exec'ing npm start. This means it creates a subshell that does not receive SIGINT, SIGTERM, etc. from the docker daemon (like a CTRL+C). The docker daemon will wait 10 seconds (the default timeout) before forcefully killing the container. The result is a 10s hang whenever the vroom container is stopped. The below should fix the issue (note `env`: a plain `exec VAR=val cmd` is not valid shell):

```shell
cd /vroom-express
exec env VROOM_ROUTER="${VROOM_ROUTER}" VROOM_LOG="${VROOM_LOG}" npm start
```
Hi! What license does this project have?
Great job!
Currently we only test for a vroom-express failure, it seems, meaning vroom is never actually called and issues like #61 can go unnoticed.
We definitely need to make that more robust and test for a 200 as well, with the example2.json.
In fact, I just saw that we actually do request and expect a 200, but curl doesn't fail unless called with --fail:
Lines 31 to 32 in df2a620
Easy fix, I'll do that real quick.
It could be great to have a way to initialize and run a container with a specific map just by passing the URL of the desired map, let's say, for example: http://download.geofabrik.de/north-america/us/washington-latest.osm.pbf
The idea behind this is that the map is downloaded, extracted, and ready to run.
We could just pass the url and the profile (example: opt/car.lua) as parameters.
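A minimal sketch of that idea: derive the dataset name from the URL, then feed it to the usual OSRM preprocessing steps (the docker commands are shown as comments for illustration; the URL and profile are the ones from the post):

```shell
# Derive file and dataset names from a Geofabrik-style URL.
map_url="http://download.geofabrik.de/north-america/us/washington-latest.osm.pbf"
profile="/opt/car.lua"

map_file="${map_url##*/}"        # washington-latest.osm.pbf
map_name="${map_file%.osm.pbf}"  # washington-latest

echo "$map_file"
echo "$map_name"

# The container entrypoint could then do something like:
#   wget -q "$map_url"
#   osrm-extract -p "$profile" "/data/$map_file"
#   osrm-partition "/data/$map_name.osrm"
#   osrm-customize "/data/$map_name.osrm"
#   osrm-routed --algorithm mld "/data/$map_name.osrm"
```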
I used the vroomvrp/vroom-docker:v1.9.0 image and integrated it with ORS using a docker-compose file.
http://localhost:8080/ors/v2/health shows ready. As I understand, the optimisation endpoint works on port 3000, so to check that vroom is up and ready I tried to access localhost:3000/health, but it gives an error (connection was refused by the server).
docker-compose file:

```yaml
version: "2.4"
services:
  vroom:
    network_mode: host
    image: vroomvrp/vroom-docker:v1.9.0
    container_name: vroom
    volumes:
      - ./vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=ors # router to use, osrm, valhalla or ors
    depends_on:
      - ors
  ors:
    container_name: ors
    ports:
      - 8080:8080
    image: openrouteservice/openrouteservice:latest
    volumes:
      - ./graphs:/ors-core/data/graphs
      - ./elevation_cache:/ors-core/data/elevation_cache
      - ./logs/ors:/var/log/ors
      - ./logs/tomcat:/usr/local/tomcat/logs
      - ./conf:/ors-conf
      - ./your_osm.pbf:/ors-core/data/osm_file.pbf # alter path to your local OSM PBF file, e.g. from https://download.geofabrik.de
    environment:
      - BUILD_GRAPHS=False # Forces the container to rebuild the graphs, e.g. when PBF is changed in app.config
      - "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
      - "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"
```
config.yml file:

```yaml
cliArgs:
  geometry: false # retrieve geometry (-g)
  threads: 4 # number of threads to use (-t)
  explore: 5 # exploration level to use (0..5) (-x)
  limit: '1mb' # max request size
  logdir: '/..' # the path for the logs relative to ./src
  logsize: '100M' # max log file size for rotation
  maxlocations: 1000 # max number of jobs/shipments locations
  maxvehicles: 200 # max number of vehicles
  override: true # allow cli options override (-g, -t and -x)
  path: '' # VROOM path (if not in $PATH)
  port: 3000 # expressjs port
  router: 'ors' # routing backend (osrm, libosrm or ors)
  timeout: 300000 # milli-seconds
  baseurl: '/' # base url for api
routingServers:
  osrm:
    car:
      host: '0.0.0.0'
      port: '5000'
    bike:
      host: '0.0.0.0'
      port: '5000'
    foot:
      host: '0.0.0.0'
      port: '5000'
  ors:
    driving-car:
      host: '0.0.0.0'
      port: '8080'
    driving-hgv:
      host: '0.0.0.0'
      port: '8080'
    cycling-regular:
      host: '0.0.0.0'
      port: '8080'
    cycling-mountain:
      host: '0.0.0.0'
      port: '8080'
    cycling-road:
      host: '0.0.0.0'
      port: '8080'
    cycling-electric:
      host: '0.0.0.0'
      port: '8080'
    foot-walking:
      host: '0.0.0.0'
      port: '8080'
    foot-hiking:
      host: '0.0.0.0'
      port: '8080'
```
Hey mate,
I am running the docker image using the same yml file settings, but I don't have osrm or ors running locally on the server. As VROOM is independent from osrm and ors, we can send a travel time matrix to the solver. Is there any change I need to make to run it without osrm and ors? Right now, I am receiving this error:
{
"code": 2,
"error": "Invalid profile: car."
}
While I am sending this data:
{"vehicles":[{"id":998,"start_index":0,"end_index":4,"capacity":[5000],"start":[39.1462209,21.57486086],"end":[39.180715,21.47527]}],"jobs":[],"shipments":[{"amount":[1],"pickup":{"id":1,"location":[39.150834,21.576574],"location_index":1},"delivery":{"id":4,"location_index":4,"location":[39.148227,21.550182]}},{"amount":[1],"pickup":{"id":2,"location":[39.149249,21.589999],"location_index":2},"delivery":{"id":4,"location_index":4,"location":[39.148227,21.550182]}},{"amount":[1],"pickup":{"id":3,"location":[39.148227,21.550182],"location_index":3},"delivery":{"id":4,"location_index":4,"location":[39.180715,21.47527]}}],"matrix":[[0,86,227,364,958],[154,0,380,418,1045],[301,293,0,521,1086],[300,365,478,0,648],[872,936,1049,660,0]]}
vroom docker works when I send a 5-job request with a custom matrix, but when I send 250 jobs with a custom matrix I don't get a response. When I try with Postman, I get a successful response for a small request and a socket hang up error for a large one. I changed the request timeout to never time out. I guess the problem is the custom matrix, because for 250 jobs it's about 62,500 entries. How can I solve this problem? Does anyone have an idea? Thanks.
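One thing worth checking for the 250-job case: the config.yml shown elsewhere on this page caps the request body at limit: '1mb', and a 250×250 custom matrix serialized as JSON can exceed that, which would explain small requests succeeding while large ones hang up. Raising the cap in the mounted config is a hedged first step (the value below is an example, not a recommendation):

```yaml
cliArgs:
  limit: '10mb'  # max request size; the default '1mb' can be exceeded
                 # by a 250x250 custom matrix
```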
I think it would be useful to add some sample files to this project in a ./conf/ directory.
Is that a good idea?
Switched to version 1.12.1 to get changes made due to #58, and now I get this error when starting the container.
vroom | vroom-express listening on port 3000!
vroom | Thu, 11 Aug 2022 12:12:47 GMT: vroom: /usr/lib/aarch64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.26' not found (required by vroom)
vroom |
This issue is not present in 1.12.0.
I have installed this and got it running, got code 200 back when running the health test. When I try to run the example command I get the response
{"code":3,"error":"Failed to connect to 0.0.0.0:5000"}
What am I doing wrong?
Hi,
I cannot manage to run the server on mac. I do get the following error:
ERROR: for ors Cannot start service ors: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/josefa/projects/vroom-docker-master/data/heidelberg.osm.gz\\\" to rootfs \\\"/var/lib/docker/overlay2/3eda84bd2d4f9e7b9b2c63c48866d28c40d9b2644b3c78d5436dfec2abb802fb/merged\\\" at \\\"/var/lib/docker/overlay2/3eda84bd2d4f9e7b9b2c63c48866d28c40d9b2644b3c78d5436dfec2abb802fb/merged/ors-core/data/osm_file.pbf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type ERROR: Encountered errors while bringing up the project.
Thanks,
José.
Hi.
I use docker-compose.yml to run it; however, when I check http://localhost:3000/health it says it's not working.
Here is the content of my docker-compose.yml file:
docker-compose.txt
Hope to get help soon. Thanks.
Hello there,
I am new to docker and vroom, so I hope you could help guide me, please.
I have the osrm backend installed locally, and I would like to use optimisation for a vehicle routing problem.
Do I need to install openrouteservice locally to do this, or should OSRM do the job?
These are the steps I already took
I must have missed something - please could you kindly guide me what I need to do?
Thank you so much
Ped
docker-compose up -d
results in the following error.
ERROR: pull access denied for vroomproject/vroom-docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Has this image been moved / should this be a different image?
Full output:
docker-compose --version
docker-compose version 1.25.4, build 8d51620a
root@aap-routing:~/routing/vroom-docker# docker-compose up -d
Creating network "vroom-docker_default" with the default driver
Pulling ors-app (openrouteservice/openrouteservice:latest)...
latest: Pulling from openrouteservice/openrouteservice
50e431f79093: Pull complete
dd8c6d374ea5: Pull complete
c85513200d84: Pull complete
55769680e827: Pull complete
e27ce2095ec2: Pull complete
5943eea6cb7c: Pull complete
3ed8ceae72a6: Pull complete
da8f33cdc431: Pull complete
3d8eda6fc7ed: Pull complete
42854301ae19: Pull complete
91fc3cb96575: Pull complete
47d77f5ddc71: Pull complete
185ab9901aab: Pull complete
7e65ab34a025: Pull complete
9ba52ff6c64f: Pull complete
0375d7347e2f: Pull complete
9567eab3e7ce: Pull complete
Digest: sha256:d2773a7935de1229c07f9dfc8229bc084cfbb09ced853068e053e8fbbe3536fb
Status: Downloaded newer image for openrouteservice/openrouteservice:latest
Pulling vroom (vroomproject/vroom-docker:v1.6.0)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]y
Pulling vroom (vroomproject/vroom-docker:v1.6.0)...
ERROR: pull access denied for vroomproject/vroom-docker, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
I just stumbled upon VROOM the other day, so I haven't been here long enough to know the plans. Is a docker image / implementation planned for the near future? Is there an example of a current docker implementation elsewhere?
When VROOM-Project/vroom-express#44 lands, we can use that to check for the Docker container's health.
I followed the steps and built my test at https://api-vroom.cr.onway.app/
But when I try a test request, I sometimes get (using VROOM_ROUTER="ors"):
2020-11-06T13:05:18.465245743Z Fri, 06 Nov 2020 13:05:18 GMT: [Error]
2020-11-06T13:05:18.465584149Z Fri, 06 Nov 2020 13:05:18 GMT: Failed to connect to 0.0.0.0:5000
2020-11-06T13:05:18.465862641Z Fri, 06 Nov 2020 13:05:18 GMT:
Changing environment variables and using VROOM_ROUTER="osrm", I got:
2020-11-06T13:12:49.048898242Z Fri, 06 Nov 2020 13:12:49 GMT: [Error]
2020-11-06T13:12:49.049249511Z Fri, 06 Nov 2020 13:12:49 GMT: Invalid profile: car.
2020-11-06T13:12:49.049500938Z Fri, 06 Nov 2020 13:12:49 GMT:
Any help?
The current (decompressed) image size is 1.16 GB. That really brings down the joy when pulling from Dockerhub.
Proposed solution:
Multi-stage build similar to OSRM: https://hub.docker.com/r/osrm/osrm-backend/dockerfile. That has only a decompressed size of 100 MB.
Next one bites the dust.. Got an email from Dockerhub telling me that auto-builds will no longer be supported from 18.06. on, which would "break" our release system here. One alternative would be to join their "Docker Open Source program", another to set up the release system on GitHub Actions.
Hello! I want to run vroom with OSRM, and the osrm image gets an error:
osrm | [warn] Missing/Broken File: /data/map.osrm.ramIndex
osrm | [warn] Missing/Broken File: /data/map.osrm.fileIndex
osrm | [warn] Missing/Broken File: /data/map.osrm.edges
osrm | [warn] Missing/Broken File: /data/map.osrm.geometry
osrm | [warn] Missing/Broken File: /data/map.osrm.turn_weight_penalties
osrm | [warn] Missing/Broken File: /data/map.osrm.turn_duration_penalties
osrm | [warn] Missing/Broken File: /data/map.osrm.datasource_names
osrm | [warn] Missing/Broken File: /data/map.osrm.names
osrm | [warn] Missing/Broken File: /data/map.osrm.timestamp
osrm | [warn] Missing/Broken File: /data/map.osrm.properties
osrm | [warn] Missing/Broken File: /data/map.osrm.icd
osrm | [warn] Missing/Broken File: /data/map.osrm.maneuver_overrides
osrm | [error] Required files are missing, cannot continue
osrm exited with code 1
Can you help me resolve this?
In the readme, it jumps straight to what you should do if you want to run ORS in a different container. Could this section of the readme open with a preferred or quickstart way to connect ORS (via docker)?
Along with v1.7.0 for vroom, we have a new v0.7.0 release for vroom-express.
I'm trying to install Vroom and ORS on my windows 10 desktop to use with the openrouteservice r package. I install the 2 docker images as follows:
ORS
docker run -dt --name ors -p 8080:8080 -v $PWD/graphs:/ors-core/data/graphs -v $PWD/elevation_cache:/ors-core/data/elevation_cache -v $PWD/conf:/ors-conf -e "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g" -e "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost" openrouteservice/openrouteservice:latest
VROOM
docker run -dt --name vroom -p 3000:3000 -v $PWD/conf:/conf -e VROOM_ROUTER=ors vroomvrp/vroom-docker:v1.10.0
I can connect and see the health of ORS but not VROOM using the following:
ors
http://localhost:8080/ors/health - {"status":"ready"}
vroom
http://localhost:3000/health - it doesn't return anything.
if I do:
http://localhost:3000/vroom/health - then I get the error - Cannot GET /vroom/health
If i try to access vroom via the R package openrouteservice I get a 404 error..
I also used curl to check and here is what I got:
ORS
curl http://host.docker.internal:8080
StatusCode : 200
StatusDescription :
Content :
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Apache Tomcat/8.5.39</title>
<link href="favicon.ico" rel="icon" type="image/x-icon" />
<...
RawContent : HTTP/1.1 200
Transfer-Encoding: chunked
Content-Type: text/html;charset=UTF-8
Date: Sat, 24 Jul 2021 18:29:03 GMT
...
Forms : {}
Headers : {[Transfer-Encoding, chunked], [Content-Type, text/html;charset=UTF-8], [Date, Sat, 24 Jul 2021 18:29:03 GMT]}
Images : {@{innerHTML=; innerText=; outerHTML=; outerText=; tagName=IMG; alt=[tomcat logo];
src=tomcat.png}}
InputFields : {}
Links : {@{innerHTML=Home; innerText=Home; outerHTML=Home; outerText=Home; tagName=A;
href=https://tomcat.apache.org/}, @{innerHTML=Documentation; innerText=Documentation; outerHTML=Documentation;
outerText=Documentation; tagName=A; href=/docs/}, @{innerHTML=Configuration; innerText=Configuration; outerHTML=Configuration; outerText=Configuration; tagName=A; href=/docs/config/}, @{innerHTML=Examples;
innerText=Examples; outerHTML=Examples; outerText=Examples; tagName=A; href=/examples/}...}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 11266
VROOM
curl http://host.docker.internal:3000
curl : Cannot GET /
At line:1 char:1
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
I also tried calling vroom with the following:
curl --header "Content-Type:application/json" --data '{"vehicles":[{"id":0,"start":[42.316,-71.033],"end":[42.360,-71.093]}],"jobs":[{"id":0,"location":[42.358,-71.095]},{"id":1,"location":[42.339,-71.094]}],"options":{"g":true}}' http://localhost:3000
Invoke-WebRequest : A positional parameter cannot be found that accepts argument 'Content-Type:application/json'.
At line:1 char:1
+ CategoryInfo : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
I'm sure I'm doing something wrong but I'm new to Docker and I've spent 10+ hours trying to figure this out so any help would be greatly appreciated.
Thanks. Tom....
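The curl errors above come from PowerShell, where curl is an alias for Invoke-WebRequest, which does not understand curl-style flags like --header and --data. One workaround (a sketch; run it from PowerShell, payload taken from the post) is to call the real binary as curl.exe:

```shell
# In PowerShell, bypass the Invoke-WebRequest alias by naming the binary:
#   curl.exe --header "Content-Type: application/json" --data "$req" http://localhost:3000
# The same request body from the post, kept in a variable for clarity:
req='{"vehicles":[{"id":0,"start":[42.316,-71.033],"end":[42.360,-71.093]}],"jobs":[{"id":0,"location":[42.358,-71.095]},{"id":1,"location":[42.339,-71.094]}],"options":{"g":true}}'
echo "$req"
```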
Hi, I am quite new to docker and setting up local servers.
I have followed all the instructions for setting up OSRM and VROOM, but when running a python script querying VROOM I get this response:
{"code":3,"error":"Failed to connect to 0.0.0.0:5000"}
The query works if launched against the demo server, and OSRM is running correctly on port 5000.
Any idea on where the problem might be?
Thank you :)
Hello,
Why, after version 1.9.0, does the driving-car profile no longer work with the ors router? If I downgrade to 1.8.0, it works fine.
Versions: 1.12.0 & 0.11.0:
{ "vehicles":[ { "id":1, "start_index":0, "profile":"driving-car", .....
Got this response:
{ "code": 1, "error": "bad optional access" }
And the log inside the container:

```
[email protected] start /vroom-express
node src/index.js
vroom-express listening on port 3000!
Wed, 09 Nov 2022 10:13:19 GMT: [Error] bad optional access
Wed, 09 Nov 2022 10:13:30 GMT: [Error] bad optional access
```

And if I change the profile to car, it is computed fine:
{ "vehicles":[ { "id":1, "start_index":0, "profile":"car", .....