Prepare a dump of the DB to speed up the launch of the cve-search container.
The dump file is data.init.tgz.
To rebuild such a dump, go to a running local instance, stop mongo and redis, then dump the DB.
In your running cve-search local container (do not do this on a production container, as it stops services):
# Kill mongo
kill $(pidof -x "mongod")
# Kill redis
kill $(pidof -x "redis-server")
# Dump the current DB
tar -C /data -czvf /tmp/data.init.tgz .
On your host:
# Copy the dump from the container to your host
# Adapt the container name from cve to FOO if you called your cve-search container FOO
mkdir -p ./files
cd ./files
docker cp cve:/tmp/data.init.tgz ./
# To avoid GitHub's 100M max file size limit, split this big file into pieces
rm -fr initdb_split_*
split -b 10m data.init.tgz initdb_split_
# To restore the original file, use cat
cat initdb_split_* > data.init.tgz
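To be sure the reassembled archive is byte-identical to the original, you can rehearse the split/cat round trip with a checksum comparison. This is a minimal sketch using a small stand-in file (the names sample.bin and part_ are illustrative); the same check works on the real data.init.tgz.

```shell
# Verify that split + cat reproduces a file byte-for-byte.
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"
# Stand-in for data.init.tgz (1 MB of random data)
head -c 1000000 /dev/urandom > sample.bin
# Same technique as above, just smaller pieces
split -b 100k sample.bin part_
# Reassemble in glob (alphabetical) order, which matches split's suffix order
cat part_* > rebuilt.bin
# cmp exits non-zero if the files differ
cmp sample.bin rebuilt.bin && echo "round trip OK"
```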