This repository contains:
- A set of requirements files that combine all packages needed to run a Kinto server with a known good set of dependencies.
- An example configuration file to run it.
The most important function of this repository is to build a Docker image with a set of known working dependencies and then ship that to DockerHub.
You need Docker and docker-compose. The simplest way to test that all is working as expected is to run:
$ docker-compose run web migrate # only needed once
$ docker-compose run tests
Note: The run web migrate command is only needed once, to prime the PostgreSQL server. You can flush all the Kinto data in your local persistent PostgreSQL with:
$ curl -XPOST http://localhost:8888/v1/__flush__
That will start memcached, postgresql, autograph (at autograph:8000) and Kinto (at web:8888), and lastly the tests container, which primarily uses curl http://web:8888/v1 to test various things.
The individual servers will still be running and occupying those ports on your local network after the above command finishes. When you're done, run:
$ docker-compose stop
The simplest form of debugging is to start the Kinto server (with uwsgi, which is the default) in one terminal first:
$ docker-compose up web
Now, in a separate terminal, first check that you can reach the Kinto server:
$ curl http://localhost:8888/v1/__heartbeat__
Then run the tests:
$ docker-compose run tests
Suppose you want to play with running the Kinto server; you can start a bash session like this:
$ docker-compose run --service-ports --user 0 web bash
Now you're root, so you can do things like apt-get update && apt-get install jed to install tools and editors. Also, because of --service-ports, if you do start a Kinto server on :8888, it will be exposed from the host. For example, instead of starting Kinto with uwsgi, you can start it manually with kinto start:
$ kinto start --ini config/example.ini
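The --service-ports flag only has an effect because the web service declares a host port mapping. A hypothetical excerpt of what that mapping looks like in docker-compose.yml (the real file in this repo may differ):

```yaml
# Hypothetical excerpt -- check the actual docker-compose.yml in this repo.
services:
  web:
    ports:
      - "8888:8888"   # host:container; published by `up` and by `run --service-ports`
```

Plain docker-compose run ignores the ports: section precisely so that ad-hoc debug containers don't collide with an already-running web service.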
Another thing you might want to debug is the tests container that runs the curl commands against the Kinto server. But before you do that, you probably want to start the services:
$ docker-compose up web
$ docker-compose run tests bash
Now, from that bash session, you can reach the other services like:
$ curl http://autograph:8000/__heartbeat__
$ curl http://web:8888/v1/__heartbeat__
The most common use case with kinto-dist is upgrading one of the dependencies. All dependencies are listed in:
requirements/default.txt
requirements/constraints.txt
requirements/prod.txt
If there's a package you want to upgrade or add, do that in the requirements/default.txt file. If you find that what you're adding requires its own dependencies, add those to requirements/constraints.txt.
To upgrade the requirements file, install hashin globally on your laptop and then run the following (example) command:
$ hashin -r requirements/default.txt myhotnewpackage
Or if you know the exact version you need:
$ hashin -r requirements/default.txt myhotnewpackage==1.2.3
If you just want to upgrade an existing package to the latest version available on PyPI, you do it as if it were a new package. For example:
$ hashin -r requirements/default.txt requests
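Assuming hashin behaves as documented, a command like the one above updates the pin in requirements/default.txt to a hash-locked entry along these lines (the version number and hashes below are placeholders, not real values):

```
requests==2.31.0 \
    --hash=sha256:<hash-of-the-sdist> \
    --hash=sha256:<hash-of-a-wheel>
```

The --hash lines are what make pip install in hash-checking mode, which is why every package (and every transitive dependency in requirements/constraints.txt) must be pinned.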
To test that this installs, run:
$ docker-compose build web
If it fails because pip believes your new package has other dependencies not already mentioned in requirements/constraints.txt, add them like this:
$ hashin -r requirements/constraints.txt imneedy alsoneeded
And finally, run docker-compose build web again.
We respect SemVer here. However, the "public API" of this package is not the user-facing API of the service itself, but is considered to be the set of configuration and services that this package and its dependencies use. Accordingly, follow these rules:
- MAJOR must be incremented if a change in configuration, system, or third-party service is required, or if any of the dependencies has a major increment.
- MINOR must be incremented if any of the dependencies has a minor increment.
- PATCH must be incremented if neither a major nor a minor increment is necessary.
In other words, minor and patch versions are uncomplicated and can be deployed automatically, and major releases are very likely to require specific actions somewhere in the architecture.
By default, when you start the web container with docker-compose up web, it actually starts two servers: one Kinto server on :8888 and another server on :9999. The latter is a Python web server built on top of http.server, and its raison d'être is to make it possible to query the .eml files in the /app/mail directory of the web container.
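A minimal sketch of such a mail file server, assuming it does nothing more than serve the /app/mail directory read-only over HTTP (the actual implementation in this repo may differ; MAIL_DIR and make_mail_server are illustrative names):

```python
import functools
import http.server

MAIL_DIR = "/app/mail"  # assumption: where the debug mailer writes .eml files


def make_mail_server(port=9999, directory=MAIL_DIR):
    # SimpleHTTPRequestHandler can serve an arbitrary directory (Python 3.7+);
    # requesting "/" returns an HTML listing of the files, e.g. the .eml files.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory
    )
    return http.server.ThreadingHTTPServer(("0.0.0.0", port), handler)


if __name__ == "__main__":
    make_mail_server().serve_forever()
```

With something like this running, the tests container can simply curl http://web:9999/ to list the mails and fetch individual .eml files by name.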
It exists because the web container sends emails that are redirected to disk (because of mail.debug_mailer = true in config/example.ini) and, in CircleCI, two different containers can't reach the same file system. So this is a simple way for one container to ask another container about its .eml files over HTTP.
There are two docker-compose config files. The only difference between them, and it should remain the only difference, is that docker-compose.yml mounts host file systems and docker-compose.ci.yml does not. Just remember: if you make a change to one, replicate it in the other. It must always be possible to do locally what CircleCI does, and vice versa.