smebberson / docker-alpine
Docker containers running Alpine Linux and s6 for process management. Solid, reliable containers.
License: MIT License
I'm compiling a native library for OpenZWave using the node package node-openzwave-shared. The source seems to compile fine, but when the node package runs process.dlopen() it throws the error Dynamic loading not supported. This is apparently due to node.js being built as a static binary (see this comment).
The suggestion at the linked comment is to remove the --fully-static flag. I'm currently using alpine-nodejs, but would like to move to the consul variants at a later stage. Any chance of removing the flag, or of having a variant that allows this?
The issue also seems to be described on the nodejs wiki.
I'd like to see an openjdk-based image using alpine-base.
My team works with the Play Framework, which is perfectly capable of running in the standard "docker way", except that the openjdk/alpine image doesn't have the s6 overlay. I believe it will therefore suffer from the resolv.conf issue, and so could benefit from the go-dnsmasq functionality you have here.
If there's interest, I will fork and submit a PR.
I use the base image alpine-apache and additionally install php-apache2:
RUN apk add --update \
php-apache2 \
php-sqlite3 \
php-xml \
php-curl \
php-dom \
php-iconv \
php-pdo_sqlite \
php-json \
php-ctype \
&& rm -rf /var/cache/apk/*
When I try to start the container I get the following errors:
Jän 08 22:56:15 <hostname> docker[19307]: [Sun Jan 08 21:56:15.136430 2017] [core:error] [pid 201] (2)No such file or directory: AH00099: could not create /run/apache2/http
Jän 08 22:56:15 <hostname> docker[19307]: [Sun Jan 08 21:56:15.136455 2017] [core:error] [pid 201] AH00100: httpd: could not log pid to file /run/apache2/httpd.pid
Any ideas where this could come from? My containers were working at first; this only started after I rebuilt them in recent weeks (I think around two weeks ago).
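For what it's worth, a workaround in similar reports is to recreate the runtime directory at container init, since newer apache2 packages on Alpine no longer ship /run/apache2, so httpd cannot write /run/apache2/httpd.pid. A minimal sketch (the cont-init.d filename and helper name are my own assumptions):

```shell
#!/bin/sh
# Hypothetical /etc/cont-init.d/00-apache-rundir sketch: recreate the
# directory httpd needs for its pid file before the service starts.
ensure_run_dir() {
    dir="${1:-/run/apache2}"
    mkdir -p "$dir" && chmod 0755 "$dir"
}
```

The init script would then call `ensure_run_dir /run/apache2` (and chown it to the apache user if the worker needs write access).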
When running one of the examples, and hitting Ctrl+C to terminate, I see:
==> Caught signal: terminated
==> Gracefully shutting down agent...
2016/01/06 20:45:30 [INFO] consul: client starting leave
execlineb: usage: execlineb [ -p | -P | -S nmin ] [ -q | -w | -W ] [ -c commandline ] script args
[cont-finish.d] executing container finish scripts...
Perhaps related, but at the very end I see an s6-svscanctl error:
[s6-finish] sending all processes the TERM signal.
2016/01/06 20:45:30 [ERR] dns: error starting tcp server: accept tcp 127.0.0.1:8600: use of closed network connection
2016/01/06 20:45:30 [INFO] agent: requesting shutdown
2016/01/06 20:45:30 [INFO] consul: shutting down client
2016/01/06 20:45:30 [WARN] serf: Shutdown without a Leave
2016/01/06 20:45:30 [INFO] agent: shutdown complete
s6-svscanctl: fatal: unable to control /var/run/s6/services: supervisor not listening
[s6-finish] sending all processes the KILL signal and exiting.
@smebberson I'm concerned that this project has come to a halt. We have outdated base images, old ca-certificates, stale PRs, etc. I do not want to fork either.
I propose creating an organization, or something that allows me to merge, tag and release (I assume Docker Hub is updated automatically with releases). You would still be the owner.
To further improve common goals and communication I would:
The current consul-ip script searches for consul IPs to join and returns the first one. I think it would be a better solution to return more than one: sometimes that first consul container is down and the other containers cannot join the cluster.
This could be achieved by passing a set of well-known IPs, or just everything found under the consul DNS name.
I made the modifications and it works a lot better in situations where the containers go down and up. I will do a pull request for you to review.
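A sketch of the idea, assuming the script ends up building the agent's join arguments (the helper name is hypothetical; -retry-join is a real consul agent flag):

```shell
# Hypothetical helper: turn every resolved consul IP into a -retry-join
# flag instead of joining only the first address, so the agent can fall
# back to other nodes when the first one is down.
join_args() {
    args=""
    for ip in "$@"; do
        args="$args -retry-join $ip"
    done
    printf '%s' "$args"
}
```

The run script would then do something like `consul agent $(join_args $ips) ...` with the IPs resolved from the consul DNS name.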
If consul-template in alpine-consul-base is started, the process will be killed after consul has already gone away, leading to errors such as:
[ERR] (view) "key(ssl/crt)" store key: error fetching: Get http://127.0.0.1:8500/v1/kv/ssl/crt?stale=&wait=60000ms: dial tcp 127.0.0.1:8500: getsockopt: connection refused
Also, it seems that s6-svc -h /var/run/s6/services/consul-template does not kill the old process. Actually, s6-svc -k has no effect at all; I couldn't kill the process no matter what flag I used.
If I manually kill the consul-template process, the error above goes away. There is consul-template -pid-file, which I tried using together with /etc/cont-finish.d/#consul-template (a weird name, to have it execute before 00-consul) containing kill -9 `cat /var/run/consul-template.pid`, but apparently it doesn't die quickly enough, as sometimes the error above would still show up.
Any ideas?
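One pattern that might help with the "doesn't die quickly enough" race, sketched here as an assumption (the pid file matches what consul-template -pid-file would write; the polling loop is my own invention): have the finish script kill the recorded pid and block until the process has actually exited, so consul is still up while consul-template shuts down.

```shell
# Hypothetical cont-finish.d helper: terminate the pid recorded in a
# pid file and poll until the process has really exited (or the
# timeout, in seconds, runs out).
kill_and_wait() {
    pidfile="$1"; tries="${2:-10}"
    [ -f "$pidfile" ] || return 0
    pid="$(cat "$pidfile")"
    kill "$pid" 2>/dev/null || return 0
    while [ "$tries" -gt 0 ] && kill -0 "$pid" 2>/dev/null; do
        sleep 1
        tries=$((tries - 1))
    done
    ! kill -0 "$pid" 2>/dev/null
}
```

A finish script would call `kill_and_wait /var/run/consul-template.pid` before anything touches consul.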
Sorry to open an issue about this, but I have been racking my brain and googling, and I can't figure out what I need to change to get the consul service working.
The documentation states: "This container has been setup to automatically connect to a Consul cluster, created with a service name of consul."
I'm trying to bring this up in a standalone vagrant vm:
docker run -d --name consul-test smebberson/alpine-consul-nodejs
...
Status: Downloaded newer image for smebberson/alpine-consul-nodejs:latest
b19f29cc949e2f65cefbb91509602d124bd5abcc3dcd0e03fb6a595cb461c287
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b19f29cc949e smebberson/alpine-consul-nodejs "/init" 49 seconds ago Up 48 seconds 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul-test
docker exec -t -i b19f29cc949e sh
/ # dig @localhost google.com
google.com. 299 IN A 216.58.218.206
/ # dig @localhost consul.service.consul
;consul.service.consul. IN A
The only log file with anything in it is /var/log/go-dnsmasq/go-dnsmasq.log
time="2017-01-16T22:08:14Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-16T22:08:14Z" level=info msg="Nameservers: [10.0.2.15:53]"
time="2017-01-16T22:08:14Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-16T22:08:14Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-16T22:08:14Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-16T22:09:45Z" level=error msg="[64612] Error looking up literal qname 'consul.service.consul.' with upstreams: read udp 172.17.0.4:39666->172.17.0.4:8600: read: connection refused"
time="2017-01-16T22:09:45Z" level=error msg="[65000] Error looking up literal qname 'consul.service.consul.' with upstreams: read udp 172.17.0.4:44608->172.17.0.4:8600: read: connection refused"
The only thing that appears to be listening is DNS:
/ # netstat -ant |grep LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN
What I'm hoping to accomplish is that the container boots and connects to consul.service.ourdomain.consul (consul running on different servers), but every time I try to query that domain it tries to connect to the consul service that should be running locally.
Any advice would be appreciated since I'm not seeing anything in the logs.
With the current method to obtain the container IP, getent hosts $HOSTNAME, or even the new one, dig +short $HOSTNAME, there is no way to use an overlay network if one exists.
And that's the case if you are using tools like Rancher to manage your containers.
With Rancher you normally query the Rancher metadata service to get that IP: curl -s http://rancher-metadata/latest/self/container/primary_ip.
I think there should be some flexibility to obtain the IP to allow more use cases for the images.
Somewhat related: specifying the DNS server used to obtain the consul IP also breaks the Rancher DNS: dig +short consul @127.0.0.11.
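One way to get that flexibility could be an overridable lookup; a sketch (the CONTAINER_IP_CMD variable is my own invention, not something the images support today):

```shell
# Hypothetical container-ip helper: default to the current behaviour,
# but let an environment variable supply a different lookup command,
# e.g. CONTAINER_IP_CMD="curl -s http://rancher-metadata/latest/self/container/primary_ip"
container_ip() {
    if [ -n "$CONTAINER_IP_CMD" ]; then
        # orchestrator-specific override
        eval "$CONTAINER_IP_CMD"
    else
        # existing behaviour: first address for the container hostname
        getent hosts "$HOSTNAME" | awk '{ print $1; exit }'
    fi
}
```

That keeps the default untouched while letting Rancher (or anything else) plug in its own metadata lookup.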
alpine-nginx specifies a specific nginx version, while alpine-consul-nginx does not.
Versions should be pinned to avoid potential breakage caused by version changes. The latest non-breaking versions available should be used for releases.
It would also be nice to note package version changes in the changelog :)
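Pinning is a one-line change in each image's Dockerfile; a sketch (the version string is purely illustrative):

```dockerfile
# Pin the package so rebuilds don't silently pull a different nginx;
# the version shown is an example only.
RUN apk add --update nginx=1.10.1-r1 \
    && rm -rf /var/cache/apk/*
```

The pin then has to be bumped deliberately, which is exactly the point.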
At present releases work as follows:
- latest tag on Docker Hub.
- 1.0.0 (or whatever release it is) tag on Docker Hub.
Note: I've turned off "automatic builds when a push happens" in Docker Hub. This is because when I push for, say, the alpine-redis image, the alpine-nodejs image also builds. This unnecessarily builds containers I don't intend to get built, and it also updates the timestamp on the :latest tag for other builds, which is really frustrating.
Another note: because of the close inheritance of some of the images (which I think is a good thing), you then need to go through and test, bump version numbers, tag, and release the downstream containers on Docker Hub to take advantage of any updates. If you update alpine-base it's a big job! I haven't been doing this in one commit either, so that I can keep the code in master as close as possible to the :latest release of any container. This makes things time-consuming and makes it hard to pull in great PRs such as #29.
It's not the greatest process at all.
I've thought of creating a build server and using Docker Hubs web hooks to automate the process a little more, but I just haven't had the time to implement anything and work through those issues above.
Also, I haven't seen much in the way of how to test and verify the state of a Docker image. It would be great to have them running through Travis CI. Especially when updating alpine-base, for example, you could see at a glance what you've broken downstream.
I'd like to create a discussion around this to see if we can streamline these processes a little.
I would recommend adding consul-template to alpine-consul-base, as it is an integral part of the Consul ecosystem.
- /etc/consul-template/templates/ holds the templates.
- /etc/consul-template/conf.d/ holds the configuration, defaulting to consul on 127.0.0.1:8500.
- The consul-template service only starts when /etc/consul-template/conf.d/ contains template configurations. No need to fire up a service that does nothing.
This setup should provide a 100% automatic way to enable consul-template when a proper configuration exists.
user-consul-nginx-nodejs has an nginx default.conf which should be updated to utilize consul-template with an s6 service definition.
If this proposition is supported I can work on the pull request.
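The "only start when configuration exists" gate could be as small as this (a sketch against the directory layout proposed above; the helper name is hypothetical):

```shell
# Hypothetical check used by an s6 run script: report whether any
# consul-template configuration files are present in conf.d.
has_templates() {
    dir="${1:-/etc/consul-template/conf.d}"
    [ -n "$(ls -A "$dir" 2>/dev/null)" ]
}
```

The run script would log and back off when `has_templates` fails, instead of starting consul-template against an empty conf.d.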
Images based on alpine-consul-base do not seem to specify an s6 start order, potentially leading to unwanted behaviour. If I have understood s6 correctly, there is no actual support for setting an order, but there is a concept of dependencies; see Readiness notification and dependency management and, more specifically, The s6-svwait program.
Example use case:
Scenario: I want all services to be discoverable through Consul server.
Given I use `alpine-consul` for service discovery
And `alpine-consul-nginx` for web server
When I start `alpine-consul-nginx`
Then Service `nginx` should be discoverable and `alive` via Consul
However, as it is today, there's no guarantee that nginx is indeed up before the Consul agent reports the "nginx" service. While the default Consul configuration for nginx does have a health check against /ping, nginx might not even start up. Therefore:
Regarding 1: I do acknowledge that not reporting a service can lead to a situation where health monitoring breaks if it depends on information provided by Consul. Perhaps instead of the exit in my example below we could log an entry and proceed with exec consul as before.
Change proposals:
Note: the example below does not contain -t for a timeout; it probably should be set to 10s for nginx.
# wait for nginx to be up
s6-svwait -u /var/run/s6/services/nginx/
status=$?
if [ $status -ne 0 ]; then
    exit $status
fi
exec consul ...
At least README.md for alpine-nginx instructs the following to restart nginx:
s6-svc -h /etc/services.d/nginx
However this results in:
s6-svc: fatal: unable to control /etc/services.d/nginx: No such file or directory
The correct command to restart nginx is:
s6-svc -h /var/run/s6/services/nginx/
This results in the nginx worker process being restarted.
We should support Consul's -advertise-wan and translate_wan_addrs through ENV.
Perhaps introduce ENV CONSUL_ADVERTISE_WAN=127.0.0.1 and ENV CONSUL_TRANSLATE_WAN_ADDRS=true, which are then checked in alpine-consul's /etc/services.d/consul/run and used if present.
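A sketch of how the run script might translate those variables. Note that translate_wan_addrs is a config-file option rather than a CLI flag, so I emit a JSON fragment for the agent's -config-dir; the function name is hypothetical and the config keys follow my reading of Consul's configuration format:

```shell
# Hypothetical: build a Consul config fragment from the proposed env
# vars, to be written into a file under the agent's -config-dir.
wan_config() {
    printf '{"advertise_addr_wan": "%s", "translate_wan_addrs": %s}' \
        "${CONSUL_ADVERTISE_WAN:-127.0.0.1}" \
        "${CONSUL_TRANSLATE_WAN_ADDRS:-false}"
}
```

The run script would call it only when at least one of the variables is set, so default behaviour is unchanged.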
Hi,
Alpine has recently added a package for consul; it's in testing right now:
http://wiki.alpinelinux.org/wiki/User_talk:Jch/consul
So, this might be something you would want to consider.
Hello!
Is this normal behaviour? After creating a container with
docker run -d smebberson/docker-alpine
and then getting a shell with
docker exec -it [container_id] sh
and sending ping 172.17.0.1
I get the error message
ping: permission denied (are you root?)
How can I solve this problem?
P.S. docker version 1.12.1
thank you
Hi,
Similar to @rbellamy in issue #62, I'm keen to see an alpine + consul + openjdk image for running JVM-based microservices.
I think the s6 overlay + consul plumbing in these images is really neat, and I hope these images have a long-term future. I would be willing to help out with openjdk image maintenance.
I'll raise a PR that basically takes @rbellamy 's openjdk PR and layers it on top of the alpine-consul-base image.
We have the Consul version check enabled, see https://consul.io/docs/agent/options.html#disable_update_check. While not a big deal, I think this should be disabled, as version management happens through images. No need to open up a connection for nothing.
Steps:
docker build -t aneeshd16/alpine-base .
docker run aneeshd16/alpine-base
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing...
: No such file or directory sh
[cont-init.d] 30-resolver: exited 111.
[cont-init.d] 40-resolver: executing...
: No such file or directory sh
[cont-init.d] 40-resolver: exited 111.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
: No such file or directory sh
<continues>
Compare this to docker run smebberson/alpine-base
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 30-resolver: executing...
[cont-init.d] 30-resolver: exited 0.
[cont-init.d] 40-resolver: executing...
[cont-init.d] 40-resolver: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Thanks in advance!
I'm suggesting that we introduce https://www.nomadproject.io/ into the base images, to complement Consul more than anything, perhaps:
- alpine-consul-nomad using alpine-consul-base (as server)
- user-consul-nomad (as server)
- user-consul-nomad-agent (as agent)
So, exactly like alpine-consul.
I left alpine-nomad out on purpose, as I do not see a use case for it, but if it should be added then it's pretty much copy & paste minus consul.
My real-world use case is that I need to execute something on servers, but I do not have direct access to those servers and no idea which server is available.
If Nomad sounds good then I can work on the PR.
Hi,
What is the /vagrant/alpine-base in the /alpine-base/build script? There's no alpine-base folder in the /vagrant path of this repo, so I am confused as to its purpose.
Thanks
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
714ecf80a71b smebberson/alpine-redis "/init" 39 minutes ago Up 39 minutes 6379/tcp redis
$ docker exec -it redis bash
exec: "bash": executable file not found in $PATH
Related to #24.
Could we add stubzones support in alpine-consul?
According to https://github.com/janeczku/go-dnsmasq ("Use different nameservers for specific domains: domain[,domain]/host[:port]"), I'm thinking the following might work for https://github.com/smebberson/docker-alpine/blob/master/alpine-base/root/etc/services.d/resolver/run:
exec go-dnsmasq --default-resolver --append-search-domains --hostsfile=/etc/hosts --stubzones=.consul/127.0.0.1:8600 >> $GO_DNSMASQ_LOG_FILE 2>&1
With updated Consul we can now use the built-in http and tcp checks, see https://www.consul.io/docs/agent/checks.html. While using custom "pings" is certainly not wrong, it is customization that should and could be avoided.
Example of HTTP check:
{
"check": {
"id": "api",
"name": "HTTP API on port 5000",
"http": "http://localhost:5000/health",
"interval": "10s"
}
}
Below are images with an HTTP check. Perhaps we could just use http://localhost/, as that typically works out of the box?
Does go-dnsmasq need to be run as root? I do get that it needs to be able to modify /etc/resolv.conf (-rw-r--r-- 1 root root 239 Mar 14 21:57), but couldn't that be taken care of with a new group?
I can work on a PR if root is not required.
It turns out, DNS resolution on Alpine Linux isn't complete.
I've recently run into this issue while moving away from IP-based service connections to hostname-based service connections (i.e. connecting to Redis on another container from within Node.js), on Tutum.
I'm thinking of using go-dnsmasq to resolve the problem, as is demonstrated here: https://github.com/janeczku/docker-alpine-kubernetes/tree/master/versions/3.2. This would go into alpine-base
.
If you have multiple IPs in your container, it's necessary to add -advertise $BIND -client 0.0.0.0 to the run script to be able to connect to it.
This modification should not cause any side effects if the container has only one address, but I think you should test it just in case.
Using smebberson/alpine-consul-base (latest) as a base and then executing su build -c "" leads to the error below.
I've also done apk upgrade --update, which led BusyBox to upgrade to busybox-1.24.2-r9.
So here's everything:
/ # su build -c ""
halt: unrecognized option: c
BusyBox v1.24.2 (2016-06-23 08:49:16 GMT) multi-call binary.
Usage: halt [-d DELAY] [-n] [-f]
Halt the system
-d SEC Delay interval
-n Do not sync
-f Force (don't go through init)
I'm not quite sure if this is a smebberson/docker-alpine problem or a problem with BusyBox. It didn't use to happen, but now, after using the latest base images, I'm getting this and cannot seem to resolve it.
Consul 0.7.0 is out now and we should update to it ASAP :) I went through the changes and, while there is breakage along with the new features, I did not see anything that affects this project.
My biggest pain point, which should be resolved now, was the inability to elect a cluster leader after failures.
I also recommend we resolve #50 by merging those changes in at the same time.
How would I go about choosing a newer version of nginx when using smebberson/alpine-nginx-nodejs?
New users are created for alpine-consul and alpine-rabbitmq. While these accounts do work, su - consul resulted in su: can't execute ':60:60:mysql:/var/lib/mysq': No such file or directory. So new users defaulted to mysql for whatever reason. While I have no legitimate reason to su - consul, having a wrong shell is still a problem. This can be easily fixed by adding -s /bin/sh to the adduser command.
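For reference, the fix would look something like this in the Dockerfile (BusyBox adduser; -D creates the user without a password; any other flags the images already pass would stay as they are):

```dockerfile
RUN adduser -D -s /bin/sh consul
```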
When using a storage back end in docker that does not support extended file attributes, the go-dnsmasq resolver is not able to bind to port 53 (or any other port < 1024).
Basically the line https://github.com/smebberson/docker-alpine/blob/master/alpine-base/Dockerfile#L18 has no effect when such a storage back end (aufs, btrfs - see moby/moby#30557) is used, so DNS fails in the container.
I suggest a simple workaround in https://github.com/smebberson/docker-alpine/blob/master/alpine-base/root/etc/services.d/resolver/run like this:
#!/usr/bin/with-contenv sh
RUNAS="go-dnsmasq"
# verify the capability is present on the binary
setcap -v CAP_NET_BIND_SERVICE=+eip /bin/go-dnsmasq
status=$?
if [ $status -ne 0 ]; then
    RUNAS="root"
fi
s6-setuidgid ${RUNAS} go-dnsmasq --default-resolver --ndots "1" --fwd-ndots "0" --hostsfile=/etc/hosts >> $GO_DNSMASQ_LOG_FILE 2>&1
This makes go-dnsmasq run as root (instead of the go-dnsmasq user) if the capability is not set on the binary (which is the case when using a back end that does not support extended file attributes).
alpine-consul should expose the default Consul HTTP API port, 8500/tcp.
A side note: alpine-consul-ui becomes obsolete once we do #19, as the UI is now part of Consul itself. The UI would be accessible via http://localhost:8500/ui.
When spinning up the complete example in examples/complete, the consul cluster fails to bootstrap with the latest build (3.1.1). If I change the version of alpine-consul back to the previous version, alpine-consul:3.1.0, things appear to work.
It looks like the DNS query is not working in the latest.
consul_1 | * Failed to resolve ;;: lookup ;;: invalid domain name
consul_1 | 2017/01/19 19:40:11 [WARN] agent: Join failed: , retrying in 30s
consul_1 | 2017/01/19 19:40:14 [ERR] agent: coordinate update error: No cluster leader
consul_3 | 2017/01/19 19:40:17 [ERR] agent: failed to sync remote state: No cluster leader
consul_2 | 2017/01/19 19:40:18 [ERR] agent: coordinate update error: No cluster leader
consul_3 | 2017/01/19 19:40:23 [ERR] agent: coordinate update error: No cluster leader
consul_2 | 2017/01/19 19:40:34 [ERR] agent: failed to sync remote state: No cluster leader
consul_1 | 2017/01/19 19:40:41 [INFO] agent: (LAN) joining: [;;]
consul_1 | 2017/01/19 19:40:41 [WARN] memberlist: Failed to resolve ;;: lookup ;;: invalid domain name
consul_1 | 2017/01/19 19:40:41 [INFO] agent: (LAN) joined: 0 Err: 1 error(s) occurred:
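The ";;" being passed to join suggests raw dig diagnostic output (its comment lines start with ";;") is reaching the join command when resolution fails. A defensive filter, as a sketch (function name hypothetical):

```shell
# Hypothetical guard: keep only plausible IPv4 answers from resolver
# output before handing them to `consul join`, dropping dig's ";;"
# comment lines and anything else that is not an address.
only_ipv4() {
    grep -E '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'
}
```

With something like `dig +short consul | only_ipv4`, a failed lookup yields an empty list instead of garbage addresses.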
Something else that looks suspect: in the logs for go-dnsmasq it looks like the service is being continually restarted.
/var/log/go-dnsmasq/go-dnsmasq.log
time="2017-01-19T19:38:42Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-19T19:38:42Z" level=info msg="Nameservers: [127.0.0.11:53]"
time="2017-01-19T19:38:42Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-19T19:38:42Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-19T19:38:42Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-19T19:38:42Z" level=fatal msg="listen udp 127.0.0.1:53: bind: permission denied"
time="2017-01-19T19:38:43Z" level=info msg="Starting go-dnsmasq server 1.0.7"
time="2017-01-19T19:38:43Z" level=info msg="Nameservers: [127.0.0.11:53]"
time="2017-01-19T19:38:43Z" level=info msg="Setting host nameserver to 127.0.0.1"
time="2017-01-19T19:38:43Z" level=info msg="Ready for queries on tcp://127.0.0.1:53"
time="2017-01-19T19:38:43Z" level=info msg="Ready for queries on udp://127.0.0.1:53"
time="2017-01-19T19:38:43Z" level=fatal msg="listen udp 127.0.0.1:53: bind: permission denied"
Load average: 0.00 0.01 0.00 1/210 1697
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
260 242 go-dnsma S 10576 1% 0 0% go-dnsmasq --default-resolver --ndots 1 --fwd-ndots 0 --stubzones=.consul/172.17.0.2:8600 --hostsfile=/etc/hosts
590 0 root S 6208 0% 3 0% bash
242 230 root S 1516 0% 2 0% sh ./run
1697 590 root R 1516 0% 0 0% top
259 243 root S 1512 0% 0 0% sh /usr/bin/consul-available
243 231 root S 1512 0% 3 0% sh ./run
1696 259 root S 1508 0% 2 0% sleep 1
1 0 root S 192 0% 3 0% s6-svscan -t0 /var/run/s6/services
231 1 root S 192 0% 2 0% s6-supervise consul
31 1 root S 192 0% 3 0% s6-supervise s6-fdholderd
230 1 root S 192 0% 0 0% s6-supervise resolver
bash-4.3# ls *find
container-find find
bash-4.3# sh -x container-find
dig no record
See https://github.com/hashicorp/consul/blob/master/CHANGELOG.md and https://www.consul.io/docs/upgrade-specific.html.
A couple of highlights important for us (an alpine-consul-statsd image or similar coming). Some breakage will happen:
Many, if not all, files contain CRLF instead of LF line endings. While not a huge deal, shell scripts are picky about this and currently throw : No such file or directory sh, which can be seen with user-consul-nginx-nodejs. This does not seem to affect operations, but it is a concern and can be fixed by a simple conversion.
I recommend all files be converted from CRLF to LF.
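The conversion itself is trivial; a per-file sketch:

```shell
# Strip trailing carriage returns from a file in place, turning CRLF
# line endings into LF.
crlf_to_lf() {
    sed -i 's/\r$//' "$1"
}
```

Run it over the repo's text files (or use dos2unix where available); adding a .gitattributes with `* text=auto eol=lf` would stop the CRLFs from coming back.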
We need to move toward a standardised approach to logging. Some of the images do it differently at the moment.
The main question is: should everything be sent to stdout and stderr? We need to have a way to turn it off, however.
I think in all instances logs should also go to file, so that any file-based aggregation can take place.
Any other thoughts?
The alpine-consul-apache image does not install consul-template. ^_^ @smebberson
References #36.
Roadmap
See Wikipedia explanation.
The purpose of a roadmap is to set goals and expectations. A roadmap sets focus and drives excitement. It does not dictate every single change, but rather lists known immediate upcoming changes and tries to define more distant ones - a wishlist in its simplest form. After a release, the roadmap should map almost word for word onto the changelog.
The current state of the docker-alpine project is that there is no roadmap. Releases are sporadic, which is fine and understandable considering this is a community project; more problematic is what is needed for a release to happen. While a release could happen after every single change, especially with automation, setting goals and expectations might be preferable. Having a roadmap should also take pressure off the release master, as there is a clearer definition of when to release.
I propose using the GitHub milestones feature. All issues are to be triaged and then moved to a milestone, have comments requested, or be closed. This process should be as lightweight and quick as possible.
Changelog
Example that I like: https://github.com/hashicorp/consul/blob/master/CHANGELOG.md.
I propose a simple changelog based on the roadmap (or, more specifically, milestones). If changes are issues and issues have meaningful context (so far they have), then I do not see a point in writing up again what was changed, why, etc.
Note: the docker-alpine project, due to its multiple release numbers (one per image), does pose a challenge. To simplify versioning, moving to one release number for all images, even if no change has been made, might be in order. Then in the changelog we could just note:
NO CHANGE LIST:
alpine-nginx-nodejs
alpine-nginx
I do acknowledge that doing a release with no changes is a curious thing, but I believe the simplification it brings outweighs the negative.
I recommend apk upgrade --update be executed on every image. For example, libcrypto, libssl and bind are out of date. While security is the responsibility of the user, providing the latest (at the time of build, at least) would be good practice.
Also vaguely related: "Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers." quay/clair#12.
/tmp/s6-overlay.tar.gz should be deleted once extracted.
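The usual pattern is to extract and delete in the same RUN instruction, since a file removed in a later layer still occupies space in the layer that added it. A sketch:

```dockerfile
RUN tar xzf /tmp/s6-overlay.tar.gz -C / \
    && rm -f /tmp/s6-overlay.tar.gz
```

Note that if the tarball arrives via an earlier ADD instruction, deleting it later doesn't shrink the image; ideally download, extract and delete all happen in one RUN.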
The alpine-nodejs-v6.0.0 tag points at v6.3.0.
There's greater granularity in the v4.x and v5.x release trains for nodejs, so I'm assuming this is just an oversight?
alpine-consul-nginx-nodejs and alpine-nginx-nodejs should provide a nodejs user. user-consul-nginx-nodejs and user-nginx-nodejs would then both use this user in their s6 run scripts. Right now the node server is running as root.
https://github.com/smebberson/docker-alpine/blob/master/alpine-consul-ui/root/etc/cont-finish.d/00-consul and https://github.com/smebberson/docker-alpine/blob/master/alpine-consul-ui/root/etc/services.d/consul/run seem to be unnecessary, as the base image contains exactly the same files.
The alpine-base example links to ubuntu-base.
The alpine-consul-nginx-nodejs example links to a GitHub 404 page. The new location is here.
There may be more, I've just come across these two while reading through the repo.
---> Running in 00612f21dd21
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.4/community/x86_64/APKINDEX.tar.gz
fetch http://dl-4.alpinelinux.org/alpine/v3.2/main/x86_64/APKINDEX.tar.gz
ERROR: unsatisfiable constraints:
nginx-1.10.1-r1:
breaks: world[nginx=1.8.1-r0]
The problem seems to be the mixture of 3.4 and 3.2 repositories. The old repository should be dropped and main used instead, I think.
A side note and a question: nginx has a stream module (stream_module) that needs to be built in for it to be supported. The reason this matters is when you use nginx for routing (e.g. a database TCP connection from dc1 to dc2). I have a custom image where I build nginx from aports to compile it in. Would there be interest in changing alpine-nginx and alpine-consul-nginx to this custom build to support nginx stream?