
activemq-artemis-docker's Issues

Unable to connect using AWS ECS (EC2 Container Service)

I cannot connect to the image on port 8161.

For example, if I telnet to the IP on port 8161, I get connection refused.

I use AWS ECS (EC2 Container Service) with the minimum configuration:

{
  "requiresAttributes": [
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
      "targetId": null,
      "targetType": null
    }
  ],
  "taskDefinitionArn": "arn:aws:ecs:eu-west-2:893749253116:task-definition/ventureCloud-messaging-qa:2",
  "networkMode": "bridge",
  "status": "ACTIVE",
  "revision": 2,
  "taskRoleArn": null,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 512,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 61613,
          "containerPort": 61613,
          "protocol": "tcp"
        },
        {
          "hostPort": 8161,
          "containerPort": 8161,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [],
      "name": "VentureCloudMessageBrokerContainer",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "vromero/activemq-artemis",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "vc-api-qa",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "qamessaging"
        }
      },
      "cpu": 1,
      "privileged": null,
      "memoryReservation": null
    }
  ],
  "placementConstraints": [],
  "volumes": [],
  "family": "ventureCloud-messaging-qa"
}

User credential environment vars not working

The replacement of the username/password in artemis-users.properties is not working, since the passwords are not stored in plain text. The problematic part is:

sed -i "s/artemis=simetraehcapa/$ARTEMIS_USERNAME=$ARTEMIS_PASSWORD/g" ../etc/artemis-users.properties

Entrypoint Script Bug

There is a bug in the entrypoint script that prevents the ARTEMIS_USERNAME environment variable from being applied to the artemis-roles.properties file.

Line 9 in docker-entrypoint.sh:
sed -i "s/apollo=amq/$ARTEMIS_USERNAME=amq/g" ../etc/artemis-roles.properties
should be:
sed -i "s/amq=apollo/amq=$ARTEMIS_USERNAME/g" ../etc/artemis-roles.properties

Also, the documentation says to use the environment variables ACTIVEMQ_MIN_MEMORY and ACTIVEMQ_MAX_MEMORY, but the script actually uses ARTEMIS_MIN_MEMORY and ARTEMIS_MAX_MEMORY.
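Until the documentation and the script agree, a small compatibility shim (a sketch using only POSIX parameter expansion; the variable names are the two spellings above) could honor both:

# Fall back to the documented ACTIVEMQ_* names when the ARTEMIS_* ones are unset.
ARTEMIS_MIN_MEMORY="${ARTEMIS_MIN_MEMORY:-$ACTIVEMQ_MIN_MEMORY}"
ARTEMIS_MAX_MEMORY="${ARTEMIS_MAX_MEMORY:-$ACTIVEMQ_MAX_MEMORY}"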

Getting "The command '.....' returned a non-zero code: 4 ERROR

I am deploying the code and get the ERROR :

returned a non-zero code: 4

The full log of the image build is:

[root@minion activemq-artemis-docker-master]# docker build -f Dockerfile --tag=test-0.1 . --no-cache
Sending build context to Docker daemon 38.91 kB
Step 1 : FROM openjdk:8
 ---> 891c9734d5ab
Step 2 : MAINTAINER Victor Romero <[email protected]>
 ---> Running in e22befa8824e
 ---> 133f398ae063
Removing intermediate container e22befa8824e
Step 3 : RUN groupadd -r artemis && useradd -r -g artemis artemis
 ---> Running in a106367e4bfc
 ---> a374be22e9bd
Removing intermediate container a106367e4bfc
Step 4 : RUN apt-get -qq -o=Dpkg::Use-Pty=0 update && apt-get -qq -o=Dpkg::Use-Pty=0 upgrade -y &&   apt-get -qq -o=Dpkg::Use-Pty=0 install -y --no-install-recommends libaio1 xmlstarlet jq &&   rm -rf /var/lib/apt/lists/*
 ---> Running in 6e72be76c801
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 23468 files and directories currently installed.)
Preparing to unpack .../curl_7.52.1-5+deb9u5_amd64.deb ...
Unpacking curl (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Preparing to unpack .../libcurl3_7.52.1-5+deb9u5_amd64.deb ...
Unpacking libcurl3:amd64 (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Preparing to unpack .../libcurl3-gnutls_7.52.1-5+deb9u5_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Setting up libcurl3:amd64 (7.52.1-5+deb9u5) ...
Setting up libcurl3-gnutls:amd64 (7.52.1-5+deb9u5) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up curl (7.52.1-5+deb9u5) ...
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libonig4:amd64.
(Reading database ... 23468 files and directories currently installed.)
Preparing to unpack .../0-libonig4_6.1.3-2_amd64.deb ...
Unpacking libonig4:amd64 (6.1.3-2) ...
Selecting previously unselected package libjq1:amd64.
Preparing to unpack .../1-libjq1_1.5+dfsg-1.3_amd64.deb ...
Unpacking libjq1:amd64 (1.5+dfsg-1.3) ...
Selecting previously unselected package jq.
Preparing to unpack .../2-jq_1.5+dfsg-1.3_amd64.deb ...
Unpacking jq (1.5+dfsg-1.3) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../3-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libxslt1.1:amd64.
Preparing to unpack .../4-libxslt1.1_1.1.29-2.1_amd64.deb ...
Unpacking libxslt1.1:amd64 (1.1.29-2.1) ...
Selecting previously unselected package xmlstarlet.
Preparing to unpack .../5-xmlstarlet_1.6.1-2_amd64.deb ...
Unpacking xmlstarlet (1.6.1-2) ...
Setting up libonig4:amd64 (6.1.3-2) ...
Setting up libxslt1.1:amd64 (1.1.29-2.1) ...
Setting up libjq1:amd64 (1.5+dfsg-1.3) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Setting up jq (1.5+dfsg-1.3) ...
Setting up xmlstarlet (1.6.1-2) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
 ---> 64cb6a341d65
Removing intermediate container 6e72be76c801
Step 5 : ENV GOSU_VERSION 1.9
 ---> Running in 299b6a5c8c89
 ---> 9207f44a232b
Removing intermediate container 299b6a5c8c89
Step 6 : RUN set -x     && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/*     && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"     && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"     && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"     && export GNUPGHOME="$(mktemp -d)"     && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4)     && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu     && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc     && chmod +x /usr/local/bin/gosu     && gosu nobody true
 ---> Running in 2984a5fdc2a1
+ apt-get update
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://security.debian.org stretch/updates InRelease [63.0 kB]
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2434 B]
Get:6 http://security.debian.org stretch/updates/main amd64 Packages [453 kB]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [8431 B]
Get:8 http://deb.debian.org/debian stretch/main amd64 Packages [9530 kB]
Fetched 10.3 MB in 8s (1268 kB/s)
Reading package lists...
+ apt-get install -y --no-install-recommends ca-certificates wget
Reading package lists...
Building dependency tree...
Reading state information...
ca-certificates is already the newest version (20161130+nmu1).
wget is already the newest version (1.18-5+deb9u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+ rm -rf /var/lib/apt/lists/deb.debian.org_debian_dists_stretch-updates_InRelease /var/lib/apt/lists/deb.debian.org_debian_dists_stretch-updates_main_binary-amd64_Packages.lz4 /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_Release /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_Release.gpg /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_main_binary-amd64_Packages.lz4 /var/lib/apt/lists/lock /var/lib/apt/lists/partial /var/lib/apt/lists/security.debian.org_dists_stretch_updates_InRelease /var/lib/apt/lists/security.debian.org_dists_stretch_updates_main_binary-amd64_Packages.lz4
+ awk -F- { print $NF }
+ dpkg --print-architecture
+ dpkgArch=amd64
+ wget -O /usr/local/bin/gosu https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64
--2018-03-23 13:58:47--  https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5812fd6c-16fa-11e6-9847-985f5f7d9917?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135848Z&X-Amz-Expires=300&X-Amz-Signature=e8b1e3a9245c4c8e42caa42ee5366bf96263ecd408d908778523ce4096701f9c&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64&response-content-type=application%2Foctet-stream [following]
--2018-03-23 13:58:48--  https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5812fd6c-16fa-11e6-9847-985f5f7d9917?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135848Z&X-Amz-Expires=300&X-Amz-Signature=e8b1e3a9245c4c8e42caa42ee5366bf96263ecd408d908778523ce4096701f9c&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.82.74
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.82.74|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1804608 (1.7M) [application/octet-stream]
Saving to: ‘/usr/local/bin/gosu’

     0K .......... .......... .......... .......... ..........  2%  198K 9s
    50K .......... .......... .......... .......... ..........  5%  457K 6s
   100K .......... .......... .......... .......... ..........  8%  761K 5s
   150K .......... .......... .......... .......... .......... 11%  772K 4s
   200K .......... .......... .......... .......... .......... 14% 3.57M 3s
   250K .......... .......... .......... .......... .......... 17%  794K 3s
   300K .......... .......... .......... .......... .......... 19% 2.64M 2s
   350K .......... .......... .......... .......... .......... 22%  947K 2s
   400K .......... .......... .......... .......... .......... 25% 2.27M 2s
   450K .......... .......... .......... .......... .......... 28% 1.54M 2s
   500K .......... .......... .......... .......... .......... 31% 2.46M 2s
   550K .......... .......... .......... .......... .......... 34% 2.16M 1s
   600K .......... .......... .......... .......... .......... 36% 1.77M 1s
   650K .......... .......... .......... .......... .......... 39% 2.07M 1s
   700K .......... .......... .......... .......... .......... 42% 2.40M 1s
   750K .......... .......... .......... .......... .......... 45% 4.14M 1s
   800K .......... .......... .......... .......... .......... 48% 2.13M 1s
   850K .......... .......... .......... .......... .......... 51% 3.21M 1s
   900K .......... .......... .......... .......... .......... 53% 2.45M 1s
   950K .......... .......... .......... .......... .......... 56% 1.74M 1s
  1000K .......... .......... .......... .......... .......... 59% 2.54M 1s
  1050K .......... .......... .......... .......... .......... 62%  403K 1s
  1100K .......... .......... .......... .......... .......... 65% 3.45M 1s
  1150K .......... .......... .......... .......... .......... 68% 1.09M 1s
  1200K .......... .......... .......... .......... .......... 70% 1.14M 0s
  1250K .......... .......... .......... .......... .......... 73% 4.37M 0s
  1300K .......... .......... .......... .......... .......... 76% 3.49M 0s
  1350K .......... .......... .......... .......... .......... 79% 4.27M 0s
  1400K .......... .......... .......... .......... .......... 82% 57.5M 0s
  1450K .......... .......... .......... .......... .......... 85% 71.0M 0s
  1500K .......... .......... .......... .......... .......... 87% 58.0M 0s
  1550K .......... .......... .......... .......... .......... 90% 54.9M 0s
  1600K .......... .......... .......... .......... .......... 93% 1.09M 0s
  1650K .......... .......... .......... .......... .......... 96% 2.68M 0s
  1700K .......... .......... .......... .......... .......... 99% 51.0M 0s
  1750K .......... ..                                         100% 94.4M=1.3s

2018-03-23 13:58:49 (1.37 MB/s) - ‘/usr/local/bin/gosu’ saved [1804608/1804608]

+ wget -O /usr/local/bin/gosu.asc https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64.asc
--2018-03-23 13:58:49--  https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64.asc
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5828570c-16fa-11e6-9e66-0433eb15bcd0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135850Z&X-Amz-Expires=300&X-Amz-Signature=5b7358312da6f10b29de64de67fce5dc820f272d14b12db26413263b0823bc2e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64.asc&response-content-type=application%2Foctet-stream [following]
--2018-03-23 13:58:50--  https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5828570c-16fa-11e6-9e66-0433eb15bcd0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135850Z&X-Amz-Expires=300&X-Amz-Signature=5b7358312da6f10b29de64de67fce5dc820f272d14b12db26413263b0823bc2e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64.asc&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.82.74
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.82.74|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 543 [application/octet-stream]
Saving to: ‘/usr/local/bin/gosu.asc’

     0K                                                       100%  323K=0.002s

2018-03-23 13:58:50 (323 KB/s) - ‘/usr/local/bin/gosu.asc’ saved [543/543]

+ mktemp -d
+ export GNUPGHOME=/tmp/tmp.wv5ldfVeuv
+ gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
gpg: keybox '/tmp/tmp.wv5ldfVeuv/pubring.kbx' created
gpg: keyserver receive failed: Server indicated a failure
+ gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
gpg: /tmp/tmp.wv5ldfVeuv/trustdb.gpg: trustdb created
gpg: key 036A9C25BF357DD4: public key "Tianon Gravi <[email protected]>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
+ gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu
gpg: Signature made Wed May 11 04:56:44 2016 UTC
gpg:                using RSA key 036A9C25BF357DD4
gpg: Good signature from "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (tianon) <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (tianon) <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (Tianon Gravi) <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi (Andrew Page) <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: B42F 6819 007F 00F8 8E36  4FD4 036A 9C25 BF35 7DD4
+ rm -rf /tmp/tmp.wv5ldfVeuv /usr/local/bin/gosu.asc
+ chmod +x /usr/local/bin/gosu
+ gosu nobody true
 ---> edba455d1992
Removing intermediate container 2984a5fdc2a1
Step 7 : ENV ACTIVEMQ_ARTEMIS_VERSION 2.4.0
 ---> Running in f6d0c1afd738
 ---> e73684744663
Removing intermediate container f6d0c1afd738
Step 8 : RUN cd /opt && wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   wget -q http://apache.org/dist/activemq/KEYS &&   gpg --import KEYS &&   gpg apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   tar xfz apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   ln -s apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION} apache-artemis &&   rm -f apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz KEYS apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc
 ---> Running in 7f0ea558e7f4
The command '/bin/sh -c cd /opt && wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   wget -q http://apache.org/dist/activemq/KEYS &&   gpg --import KEYS &&   gpg apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   tar xfz apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   ln -s apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION} apache-artemis &&   rm -f apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz KEYS apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc' returned a non-zero code: 4

Any idea why this is happening?

Messages not persisted on docker container stop/start/restart or docker host restart

If the docker host (in this instance, a boot2docker Linux VM) and the container running on it are restarted, the messages sitting in a queue are not persisted.

Steps:

  1. docker run --name artemis --restart=always --mount source=artemis-db-volume,target=/var/lib/artemis/data -d -p 8161:8161 -p 5672:5672 -e 'ARTEMIS_MIN_MEMORY=256M' -e 'ARTEMIS_MAX_MEMORY=512M' vromero/activemq-artemis

  2. A .NET tester app pushes a number of messages to the queue. No consumers are configured, and the purge-on-no-consumers flag is NOT set.

  3. The VM is restarted, or docker container stop/start commands are run.

  4. The queue is still there, but the message count is 0.
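For triage, it may help to confirm the journal actually lands on the named volume (the paths below come from the mount target in step 1; adjust if your configuration differs):

docker volume inspect artemis-db-volume                  # does the volume exist, and where does it live?
docker exec artemis ls /var/lib/artemis/data/journal     # journal files should accumulate here while messages sit in the queue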

Not possible to extend image due to COPY feeding into ENTRYPOINT

I need to do some pre-processing before starting Artemis, specifically around obtaining and installing certificates specific to my environment. But because the COPY feeds directly into the ENTRYPOINT clause, there is no way to insert new actions: overriding the ENTRYPOINT prevents the script placed by the prior COPY from being invoked.

There are two options:

  1. make the COPY statement stand alone:
    e.g. COPY "assets/docker-entrypoint.sh" "./docker-entrypoint.sh"

  2. add a clause to docker-entrypoint.sh that looks for another, optional script to run (see the sketch after this list), e.g.

     if [[ -f ./preprocess.sh ]]; then
       ./preprocess.sh
     fi
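For context, option 2 would let a derived image keep the stock ENTRYPOINT. A minimal sketch of such a child image (preprocess.sh is the hypothetical name from this proposal, and the exact destination depends on the entrypoint's working directory):

FROM vromero/activemq-artemis
COPY my-certs/ /opt/my-certs/
COPY preprocess.sh /var/lib/artemis/preprocess.sh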

Error when used with a volume

[org.apache.activemq.artemis.core.server] AMQ222141: Node Manager can not open file /var/lib/artemis/./data/journal/server.lock: java.io.IOException: No such file or directory
        at java.io.UnixFileSystem.createFileExclusively(Native Method) [rt.jar:1.8.0_151]

My yml:

version: '3'
services:
  amq-artemis:
    image: vromero/activemq-artemis:2.4.0
    ports:
      - 8161:8161
      - 61616:61616
      - 1199:1199
      - 1198:1198
    environment:
      ARTEMIS_USERNAME: admin
      ARTEMIS_PASSWORD: admin
      ARTEMIS_MIN_MEMORY: 256M
      ARTEMIS_MAX_MEMORY: 1024M
      ARTEMIS_PERF_JOURNAL: AUTO
      ENABLE_JMX: 'true'
      JMX_PORT: 1199
      JMX_RMI_PORT: 1198
    volumes:
      - /opt/docker/artemis/data:/var/lib/artemis/data
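A workaround worth trying (an assumption, not a confirmed fix): pre-create the journal directory on the host so the bind mount does not hide it, and make sure the artemis user inside the container can write to it:

mkdir -p /opt/docker/artemis/data/journal
# <artemis-uid> is a placeholder; match it to the artemis user inside the image.
sudo chown -R <artemis-uid> /opt/docker/artemis/data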

Will not run with mounted non-empty etc

If you run the image with a mounted etc containing custom configuration, it will crash with the message:

error: exec: "./artemis": stat ./artemis: no such file or directory

Note that this only happens when you use docker run to start a new container with an existing etc mount point; stopping and starting an existing container works. This makes using this image in Docker Cloud with etc mounted practically impossible.

This happens because of the if in docker-entrypoint.sh:

if [ ! "$(ls -A /var/lib/artemis/etc)" ]

which stops the broker from being created when a mounted etc is detected. A crude workaround could look like this:

if [ ! "$(ls -A /var/lib/artemis/bin)" ]; then

  # Copy mounted etc, if existing
  if [ "$(ls -A /var/lib/artemis/etc)" ]; then
    cd /var/lib/artemis
    cp -r etc etc_copy
  fi

  # Create broker instance
  cd /var/lib && \
    /opt/apache-artemis-1.5.0/bin/artemis create artemis \
      --force \
      --home /opt/apache-artemis \
      --user $EFFECTIVE_ARTEMIS_USERNAME \
      --password $EFFECTIVE_ARTEMIS_PASSWORD \
      --role amq \
      --require-login \
      --cluster-user artemisCluster \
      --cluster-password simetraehcaparetsulc

  # Replace the broker etc with the mounted one
  if [ "$(ls -A /var/lib/artemis/etc_copy)" ]; then
    cd artemis
    rm -f etc/*
    mv etc_copy/* etc/
    rm -r etc_copy
  else
    # Ports are only exposed with an explicit argument, so there is no need
    # to bind the web console to localhost
    cd /var/lib/artemis/etc && \
      xmlstarlet ed -L -N amq="http://activemq.org/schema" \
        -u "/amq:broker/amq:web/@bind" \
        -v "http://0.0.0.0:8161" bootstrap.xml
  fi

  chown -R artemis.artemis /var/lib/artemis

  cd $WORKDIR
fi

Note that this will move the mounted config files, which is not necessarily good. A better approach might be to create the broker in a temporary folder and merge the necessary files properly, without messing around with the mounted files.
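A sketch of that temporary-folder approach (the paths and the GNU cp -n no-clobber flag are assumptions, not tested against this image):

# Create a throwaway broker instance, then copy over only the files the
# mounted etc does not already provide.
/opt/apache-artemis/bin/artemis create /tmp/artemis-skeleton --force \
  --user "$EFFECTIVE_ARTEMIS_USERNAME" --password "$EFFECTIVE_ARTEMIS_PASSWORD" \
  --role amq --require-login
cp -rn /tmp/artemis-skeleton/etc/. /var/lib/artemis/etc/
rm -rf /tmp/artemis-skeleton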

Sending a message through the console on a queue gives an error

I am running the docker image with the following command:
docker run -it --rm -p 8161:8161 -p 61616:61616 vromero/activemq-artemis

Then, if I go to the Artemis console on port 8161 and try to send a message on a queue, it gives me the error message below (as can be seen in the attached screenshot):

[Core] Operation sendMessage(java.util.Map, int, java.lang.String, boolean, java.lang.String, java.lang.String) failed due to: java.lang.IllegalStateException : AMQ119213: User: null does not have permission='SEND' for queue DLQ on address DLQ

Support of configuration snippet override

Problem

Currently, the way a user can change the broker configuration is by dropping a file called broker.xml into the etc-override folder in the broker directory. This configuration mechanism is sufficient when we only need to add a single configuration layer on top of the Artemis docker image.

We want to provide a way where users can extend/override the broker's configuration with multiple files that can be added in different layers.

The same issue presents itself with custom transformations, where a custom transformation has to be applied over the latest broker.xml in the parent layer.

Example

Consider the following example:

A team alpha creates an Artemis image with cluster configuration; for that, it creates a new layer on top of the Artemis base image with a broker.xml file that adds cluster configuration to the base image.

At the same time, a team beta wants to deploy Artemis with a cluster configuration and default queues defined.

It would be very useful for the beta team to use the alpha image as the base image for their implementation.

Proposed solution.

Instead of using a single broker.xml configuration file to extend/override the base broker.xml configuration, a set of configuration snippets could be used. Each snippet will extend/override the previous configuration. So for the previous example, the alpha team would create a snippet that adds the clustering configuration, and beta would only add the default-queues configuration to the same folder. The beta team would also be able to apply a custom transformation over the broker.xml of the alpha team. The transformation will be performed right before the merge.

Details.

Inside the etc-override folder, allow users to drop XML files that hold the pieces of broker configuration with which the base configuration will be extended.

The naming format of those files will be:

  • broker-{{desc}}.xml, where desc is a descriptive index of the configuration to be merged.
  • broker-{{desc}}.xslt, where desc is the same as in the corresponding configuration file.

Both files are optional; one file does not require the other.

The configuration will be applied following the index order.

Example solution.

The alpha team would add the following file:

broker-00.xml:

 <configuration xmlns="urn:activemq"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

      <core xmlns="urn:activemq:core">

        <jmx-management-enabled>true</jmx-management-enabled>
        <persistence-enabled>true</persistence-enabled>
        <cluster-user>exampleUser</cluster-user>
        <cluster-password>secret</cluster-password>

        <connectors>
            <connector name="open-numbat-activemq-artemis-0">tcp://open-numbat-activemq-artemis-0.open-numbat-activemq-artemis.default.svc.cluster.local:61616</connector>
            <connector name="open-numbat-activemq-artemis-1">tcp://open-numbat-activemq-artemis-1.open-numbat-activemq-artemis.default.svc.cluster.local:61616</connector>
        </connectors>
        <cluster-connections>
          <cluster-connection name="replication-cluster">
            <address>jms</address>
            <connector-ref>open-numbat-activemq-artemis-0</connector-ref>
            <retry-interval>1000</retry-interval>
            <retry-interval-multiplier>1.1</retry-interval-multiplier>
            <max-retry-interval>5000</max-retry-interval>
            <initial-connect-attempts>-1</initial-connect-attempts>
            <reconnect-attempts>-1</reconnect-attempts>
            <message-load-balancing>OFF</message-load-balancing>
            <max-hops>1</max-hops>

            <static-connectors allow-direct-connections-only="true">    
                <connector-ref>open-numbat-activemq-artemis-0</connector-ref>
                <connector-ref>open-numbat-activemq-artemis-1</connector-ref>
            </static-connectors>
         </cluster-connection>
       </cluster-connections>

       <ha-policy>
         <replication>
           <master>
             <check-for-live-server>false</check-for-live-server>
           </master>
         </replication>
       </ha-policy>
      </core>
    </configuration>

And the beta team would add the following:

broker-01.xml:

<queues>
   <queue name="jms.queue.selectorQueue">
      <address>jms.queue.selectorQueue</address>
      <filter string="color='red'"/>
      <durable>true</durable>
    </queue>
</queues>

Container does not appear to terminate gracefully

I'm using 2.6.2. The SIGTERM signal gets lost as far as I can see, and when I issue docker stop I never see this in the logs:

2018-07-31 01:12:30,398 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.2 [c88bc5de-945e-11e8-8b55-0242ac110002] stopped, uptime 5.710 seconds

To reproduce:

$ docker run -d -e ARTEMIS_PERF_JOURNAL=ALWAYS --name graceful-artemis vromero/activemq-artemis
$ docker logs -f graceful-artemis
$ docker stop graceful-artemis
$ docker logs -f graceful-artemis

I do get a graceful shutdown when running in -it mode, as follows:

$ docker run -it --name vromero-artemis vromero/activemq-artemis

The context is that in clusters, graceful shutdown is important for message redistribution.
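If it helps triage: a common cause of lost SIGTERMs in containers (an assumption here, not a confirmed diagnosis of this image's script) is an entrypoint that starts the broker without exec, so the wrapping shell stays PID 1 and never forwards the signal to the JVM:

# In docker-entrypoint.sh, launching the broker as a child process
#   ./artemis run
# keeps the shell as PID 1; replacing it with
exec ./artemis run
# hands PID 1 (and docker stop's SIGTERM) directly to the broker.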

My humble apologies if this is a PEBKAC issue.

Alpine build error

Hi @vromero,
Thank you for your work :)
How can we get the sources from git to build the 2.4.0-alpine image?

Step 13/27 : COPY merge.xslt /opt/merge
lstat merge.xslt: no such file or directory

If I'm not wrong, only the Dockerfile for the 2.4.0-alpine image can be downloaded, so could you give access to the artemis-2.4.0-alpine folder of this repo?

ActiveMQ Artemis 1.5.5 does not properly merge XML

And now for something completely different: for Artemis 1.5.5, when given an extra config block (broker-00.xml) with the following contents:

<?xml version='1.0' encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <jms xmlns="urn:activemq:jms">
        <queue name="anQueueName"/>
    </jms>
</configuration>

Then anQueueName is not deployed; instead, the following file is generated:

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <jms xmlns="urn:activemq:jms">
        <queue name="DLQ"></queue>
        <queue name="ExpiryQueue"></queue>
    </jms>
    <core xmlns="urn:activemq:core">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>ASYNCIO</journal-type>
        <paging-directory>./data/paging</paging-directory>
        <bindings-directory>./data/bindings</bindings-directory>
        <journal-directory>./data/journal</journal-directory>
        <large-messages-directory>./data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>-1</journal-pool-files>
        <journal-buffer-timeout>640000</journal-buffer-timeout>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <global-max-size>104857600</global-max-size>
        <acceptors>
            <acceptor name="artemis">
                tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE
            </acceptor>
            <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP</acceptor>
            <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP</acceptor>
            <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP</acceptor>
            <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"></permission>
                <permission type="deleteNonDurableQueue" roles="amq"></permission>
                <permission type="createDurableQueue" roles="amq"></permission>
                <permission type="deleteDurableQueue" roles="amq"></permission>
                <permission type="consume" roles="amq"></permission>
                <permission type="browse" roles="amq"></permission>
                <permission type="send" roles="amq"></permission>
                <permission type="manage" roles="amq"></permission>
            </security-setting>
        </security-settings>
        <address-settings>
            <address-setting match="#">
                <dead-letter-address>jms.queue.DLQ</dead-letter-address>
                <expiry-address>jms.queue.ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
            </address-setting>
        </address-settings>
    </core>
    <jms xmlns="urn:activemq:jms">
        <queue name="anQueueName"></queue>
    </jms>
</configuration>

The queue anQueueName is therefore not deployed.
The merge functionality works fine for 2.6.0.
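For comparison, a correct merge would fold the snippet's queue into the existing <jms> element rather than appending a second <jms> block after <core>, roughly:

<jms xmlns="urn:activemq:jms">
    <queue name="DLQ"></queue>
    <queue name="ExpiryQueue"></queue>
    <queue name="anQueueName"></queue>
</jms>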

Remove LGPL XSLT merge

The merger for XML, written in XSLT, has an LGPL license. It's unclear to me whether, given the non-linkable nature of XSLT, this is a problem; but just in case, replace it with some other implementation.

Openshift error finding logging.properties

Hello Victor,

Have you had success running this in OpenShift?
Running this in Docker locally works great for me.
Running in OpenShift seems very close:

minishift start --vm-driver vmwarefusion
oc login -u system:admin
oc new-app --name=artemis vromero/activemq-artemis
oc expose service artemis --port=61616
oc get pods
# the pod name for artemis is artemis-1-db4hr
oc logs artemis-1-db4hr -c artemis

The error I'm getting is:

sed: can't read ../etc/logging.properties: No such file or directory

Can you think of a reason why logging.properties would not be found?

Thanks
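One hypothesis worth checking (an assumption, not verified against this image): OpenShift runs containers under a random non-root UID by default, so any step in docker-entrypoint.sh that assumes the artemis user, or write access to /var/lib/artemis, can fail before ../etc is populated. That behaviour can be approximated locally with:

# Run the image as an arbitrary non-root UID, the way OpenShift does by default.
docker run --user 100500:0 vromero/activemq-artemis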

mqtt broker doesn't process messages

I started a container with all default values. I can use the management interface, where I can also see the connected clients. I use MQTT...

vromero/activemq-artemis:latest-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:32809->1883/tcp, 0.0.0.0:32808->5445/tcp, 0.0.0.0:32807->5672/tcp, 0.0.0.0:32806->8161/tcp, 0.0.0.0:32805->61613/tcp, 0.0.0.0:32804->61616/tcp

Despite all this, I cannot deliver a single message: no error message on the publisher client's side, but no message at all on the subscriber side.

Is there any way to raise the log level, or any recommendation on how to look into the problem?
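On the log-level question: the image's etc directory ships a logging.properties (it is referenced in other issues on this page). A crude sketch for raising verbosity, assuming the stock JBoss Logging keys, would be:

docker exec -it <container> \
  sed -i 's/level=INFO/level=DEBUG/g' /var/lib/artemis/etc/logging.properties
# Restart the container afterwards for the change to take effect.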

Starting with configuration snippet (broker-00.xml) fails

When I try to run the container with the example configuration snippet (broker-00.xml), booting fails with the following message:

Merging input with '/var/lib/artemis//etc-override/broker-00.xml'
[Fatal Error] :147:18: The markup in the document following the root element must be well-formed.
Exception in thread "main" org.xml.sax.SAXParseException; lineNumber: 147; columnNumber: 18; The markup in the document following the root element must be well-formed.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
at org.apache.activemq.artemis.utils.XMLUtil.readerToElement(XMLUtil.java:89)
at org.apache.activemq.artemis.utils.XMLUtil.stringToElement(XMLUtil.java:55)
at org.apache.activemq.artemis.core.config.FileDeploymentManager.readConfiguration(FileDeploymentManager.java:76)
at org.apache.activemq.artemis.cli.commands.Configurable.getFileConfiguration(Configurable.java:93)
at org.apache.activemq.artemis.cli.commands.Run.execute(Run.java:64)
at org.apache.activemq.artemis.cli.Artemis.internalExecute(Artemis.java:125)
at org.apache.activemq.artemis.cli.Artemis.execute(Artemis.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.activemq.artemis.boot.Artemis.execute(Artemis.java:129)
at org.apache.activemq.artemis.boot.Artemis.main(Artemis.java:49)

When I export the container, the broker.xml seems OK to me (so something else is broken?).
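Since the parser complains about markup after the root element, inspecting the merged file directly may help. The Debian-based image installs xmlstarlet (see the build log above), so a quick well-formedness check (the container name is a placeholder) would be:

docker exec <container> xmlstarlet val /var/lib/artemis/etc/broker.xml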

The /etc/broker.xml is not merged with /etc-override/broker-00.xml

Dear Developers,

I'm using vromero/activemq-artemis:2.3.0-alpine (and tested also vromero/activemq-artemis:2.6.0-alpine)
minikube version: v0.27.0 (and tested also 0.26.1 and 0.25.2)

A few weeks ago the minikube yaml works correctly and the artemis pod was created with the queues were deployed when starting up the pod.

If I now log into the artemis-0 pod and navigate to /var/lib/artemis/etc-override, the broker-00.xml is present.
But if I look at /var/lib/artemis/etc/broker.xml, I see that broker-00.xml was not merged into this file. This used to work: when the pod was deployed, the queues were also deployed and could then be used.

The top part of broker-00.xml is:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
    <security-settings>
       <security-setting match="#">
          <permission type="createNonDurableQueue" roles="amq"/>
          <permission type="deleteNonDurableQueue" roles="amq"/>
          <permission type="createDurableQueue" roles="amq"/>
          <permission type="deleteDurableQueue" roles="amq"/>
          <permission type="consume" roles="guest"/>
          <permission type="send" roles="guest"/>
          <!-- we need this otherwise ./artemis data imp wouldn't work -->
          <permission type="manage" roles="amq"/>
          <!-- the indexer must be able to browse the queue -->
          <permission type="browse" roles="amq"/>
       </security-setting>
    </security-settings>

    <addresses>
        <address name="DLQ">
          <anycast>
             <queue name="DLQ"/>
          </anycast>
        </address>
        <address name="ExpiryQueue">
          <anycast>
             <queue name="ExpiryQueue"/>
          </anycast>
        </address>


        <address name="accessQueue">
          <anycast>
            <queue name="accessQueue">
               <durable>true</durable>
            </queue>
          </anycast>
        </address>

If you need more information, please contact me.

I hope you can provide a solution for this problem.

Kind regards,
Egbert
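One quick check, based on the entrypoint output quoted in another issue on this page (it prints "Merging input with '...broker-00.xml'" when the merge runs):

kubectl logs artemis-0 | grep -i "merging input"
# No match suggests the merge step never ran for the etc-override mount.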

2.2.0 tags not pushed/do not exist on docker hub

Thanks for the great job; this is one of the best Artemis images on Docker Hub. However, while trying to set up my test environment I came across the fact that the recent 2.2.0 and 2.2.0-alpine tags are not properly pushed or created on Docker Hub. Could you please fix this?

Use artemis without user and password

So far I see that a user and password are needed to connect to ActiveMQ.

We have ActiveMQ inside our VPC, subnet and security group, basically so that the application can connect to the queue without credentials.

Is there any way to bypass the authentication?
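If the etc-override merge works in your version, one sketch (reusing the snippet mechanism shown in the last issue on this page) is to disable security entirely. Note that this removes all authentication, which only makes sense behind a locked-down boundary such as the VPC and security group described above:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <security-enabled>false</security-enabled>
   </core>
</configuration>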

ARTEMIS_MAX_MEMORY environment variable is not respected

The default value in artemis.profile is "-Xmx2G" so the replacement is not working.

Solution

# Update min memory if the argument is passed
if [[ "$ARTEMIS_MIN_MEMORY" ]]; then
  sed -i "s/-Xms[^ ]*/-Xms$ARTEMIS_MIN_MEMORY/g" ../etc/artemis.profile
fi

# Update max memory if the argument is passed
if [[ "$ARTEMIS_MAX_MEMORY" ]]; then
  sed -i "s/-Xmx[^ ]*/-Xmx$ARTEMIS_MAX_MEMORY/g" ../etc/artemis.profile
fi
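With that fix in place, the variables would be honored as usual, e.g.:

docker run -d -e ARTEMIS_MIN_MEMORY=512M -e ARTEMIS_MAX_MEMORY=1G vromero/activemq-artemis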

I got an error when building the image

Hello,

I get this error when building the image from the Dockerfile on Windows:

The command '/bin/sh -c set -x && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" && export GNUPGHOME="$(mktemp -d)" && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4) && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc && chmod +x /usr/local/bin/gosu && gosu nobody true' returned a non-zero code: 2

On Unix, the error is:

The command '/bin/sh -c set -x && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" && export GNUPGHOME="$(mktemp -d)" && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4) && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc && chmod +x /usr/local/bin/gosu && gosu nobody true' returned a non-zero code: 4

Configure cluster in kuberntes with HA

Hello,

I've used your chart to deploy a cluster in Kubernetes with HA.

But when deploying the StatefulSet,

I got this error on pod-0


14:47:20,496 INFO  [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock
14:47:20,497 INFO  [org.apache.activemq.artemis.core.server] AMQ221035: Live Server Obtained live lock
14:47:20,655 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-1.jms-service.default.svc.cluster.local
	at java.net.InetAddress.getAllByName0(InetAddress.java:1280) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getAllByName(InetAddress.java:1192) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getAllByName(InetAddress.java:1126) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getByName(InetAddress.java:1076) [rt.jar:1.8.0_162]
	at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.8.0_162]
	at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.


Here is my StatefulSet


apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jms-service
  labels: 
     app: jms-service
spec:
 serviceName: jms-service
 replicas: 2
 selector:
    matchLabels:
        app: jms-service
 template:
    metadata:
      labels:
         app: jms-service
    spec:  
        containers:
        - name: jms-service
          image: kube-registry:5000/tk/jms-service:2.5
          ports:
            - containerPort: 8161
              name: http
            - containerPort: 61616
              name: core
            - containerPort: 5672
              name: amqp
          env:
            - name: ARTEMIS_USERNAME
              value: admin
            - name: ARTEMIS_PASSWORD
              value: admin
          volumeMounts:
           - name: config-override
             mountPath: /var/lib/artemis/etc-override
           - name: config-override-template
             mountPath: /var/lib/artemis/etc-override-template
          imagePullPolicy: Always
        initContainers:
        - name: init-myservice
          image: kube-registry:5000/tk/jms-service:2.5
          command: ['/bin/bash', '/var/lib/artemis/etc-override-template/configure-cluster.sh']
          volumeMounts:
          - name: data
            mountPath: /var/lib/artemis/data
          - name: config-override
            mountPath: /var/lib/artemis/etc-override
          - name: config-override-template
            mountPath: /var/lib/artemis/etc-override-template

        volumes:
        - name: config-override
          emptyDir: {}
        - name: config-override-template
          configMap:
            name: jms-service-map
        - name: data
          emptyDir: {}

And the headless Service


apiVersion: v1
kind: Service
metadata:
  name:  jms-service
  annotations:
      # Make sure DNS is resolvable during initialization.
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  publishNotReadyAddresses: true
  ports:
    - port: 8161
      name: http
      targetPort: http
    - port: 61616
      name: core
      targetPort: core
    - port: 5672
      name: amqp
      targetPort: amqp
  clusterIP: None
  selector:
    app: jms-service


But if I delete pod-0, then the pod starts OK, but then this warning appears on pod-1:

15:04:37,825 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
15:04:37,842 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

And when trying to access the console, I can't log in with the username and password. Well, I can log in, but only after a lot of retries; the login sometimes works and sometimes doesn't.

Any help?

<name> should be hostname

In order to have a better experience in the console, the <name> element in broker.xml should be the host name.
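A sketch of how the entrypoint could set this with the xmlstarlet it already installs (the namespace prefixes are bound to the URIs used in the broker.xml samples elsewhere on this page; untested):

xmlstarlet ed -L \
  -N amq="urn:activemq" -N core="urn:activemq:core" \
  -u "/amq:configuration/core:core/core:name" -v "$(hostname)" \
  /var/lib/artemis/etc/broker.xml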

Failed to connect to server

I run the docker command with/without username/pass and I get all the proper INFO messages, but I cannot connect to http://0.0.0.0:8161/console and also cannot use 0.0.0.0:1883 when trying to connect to the MQTT broker.

Do I need to set up a custom address and override default configuration?

PS. I'm using macOS and Docker version 17.09.0-ce.

artemis clustering with ha

Hi,

From what I can tell, clustering is not yet supported, although there was some talk of it on the thread that led to the creation of the activemq-artemis-docker image.

I believe that a

  • symmetric cluster; with
  • colocated replication

... would fit most expectations regarding clustering and HA. In any case, this is what I aim for.

A clustered Artemis Docker image should work with any number of replicas in a Docker Compose file similar to:

version: "3"
services:
  artemis:
    image: vromero/activemq-artemis
    ports:
      - "8161:8161"
      - "61616:61616"
      - "5445:5445"
      - "5672:5672"
      - "1883:1883"
      - "61613:61613"
    deploy:
      replicas: 4
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    volumes:
     - "/var/lib/artemis/data"
     - "/var/lib/artemis/etc"

Caveat: I am taking my first baby steps with Docker, so I am bound to miss something...

Ideally, service discovery and cluster connections would use their own backend network, not accessible to other services defined in the stack.
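For that last point, a Compose-level sketch (the network name is a placeholder, and internal overlay networks require swarm mode) could pin cluster traffic to a backend network:

version: "3"
services:
  artemis:
    networks:
      - cluster-backend
networks:
  cluster-backend:
    driver: overlay
    internal: true    # unreachable from outside the stack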

Can't access web console when installing the docker image on Kubernetes as a service

Hello,

I am trying to install Artemis in Kubernetes as a service.

Here are my pod and service files.

jms-service:1.0 is the same as vromero/activemq-artemis

POD:


apiVersion: v1
kind: Pod
metadata:
  name: jms-service
  labels: 
     app: jms-service
spec:
 containers:
  - name: jms-service
    image: kube-registry:5000/tk/jms-service:1.0
    ports:
      - containerPort: 8161
      - containerPort: 61616
      - containerPort: 5445
      - containerPort: 5672
      - containerPort: 1883
      - containerPort: 61613
    env:
      - name: ARTEMIS_USERNAME
        value: admin
      - name: ARTEMIS_PASSWORD
        value: admin

SERVICE


apiVersion: v1
kind: Service
metadata:
  name:  jms-service
spec:
  ports:
    - port: 8161
      nodePort: 30001
      name: webserver
    - port: 61616
      nodePort: 30002
      name: core
    - port: 5445
      nodePort: 30003
      name: hornetq
    - port: 5672
      nodePort: 30004
      name: amqp
    - port: 1883
      nodePort: 30005
      name: mqtt
    - port: 61613
      nodePort: 30006
      name: stomp
  selector:
    app: jms-service
  type:
    NodePort
   

The service starts OK and I get no errors, but when accessing the console at http://host:30001/console, after login I get this error every few seconds in the browser console:


ARTEMIS] plugin running [object Object]
[ARTEMIS] *************creating Artemis Console************
[activemq] ActiveMQ theme loaded
[Core] ActiveMQ Management Console started
[Core] Operation unknown failed due to: java.lang.Exception : Origin http://srvaxivln090:30001 is not allowed to call this agent
[Core] Operation unknown failed due to: java.lang.Exception : Origin http://srvaxivln090:30001 is not allowed to call this agent
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)

What am I doing wrong? Is it necessary to bind any other port?

Thank you
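The "Origin ... is not allowed to call this agent" message comes from Jolokia's CORS check rather than from Kubernetes. A hedged sketch of the usual remedy (the file and elements below exist in stock Artemis distributions, but the exact layout may differ per version) is to allow the external origin in etc/jolokia-access.xml:

<restrict>
   <cors>
      <allow-origin>http://srvaxivln090:30001</allow-origin>
      <strict-checking/>
   </cors>
</restrict>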

Alpine Linux support...

Hi,

I really like your image, but in an effort to make it smaller, I started playing around with the idea of using Alpine Linux and Oracle Java instead of the java:8 base image, which tends to be larger and uses OpenJDK instead.

It would be nice to support Artemis in such a setup. I tried to implement it, but I'm getting some problems with the volume setup; would you like to help me out to get this to work?

It would be a nice alternative to your image.

Thanks

Build automated tests

Build automated tests for all the Docker-image-specific features and integrate them into the build process.
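A minimal smoke-test sketch of what one such test could look like (the tag, port, and timeout are arbitrary choices, not existing project conventions):

#!/usr/bin/env bash
set -e
docker build -t activemq-artemis:test .
cid=$(docker run -d -p 8161:8161 activemq-artemis:test)
# Wait up to ~60s for the web console to answer, then clean up.
for i in $(seq 1 30); do
  curl -fs http://localhost:8161/ >/dev/null && break
  sleep 2
done
docker rm -f "$cid"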

Dockerizing Apache ActiveMQ Artemis master/slave config not working

I was able to successfully set up Apache ActiveMQ Artemis master/slave replication on my 2-VM cluster.

VM1 : 172.29.219.89

VM2 : 172.29.219.104

My broker.xml for the master node is:

  <connectors>
    <connector name="artemis">tcp://172.29.219.89:61616</connector>
    <connector name="cluster-connector">tcp://172.29.219.104:61616</connector>
  </connectors> 

  <cluster-user>cluster-user</cluster-user>
  <cluster-password>cluster-password</cluster-password>

  <cluster-connections>
   <cluster-connection name="cluster1">
    <address>*</address>
    <connector-ref>artemis</connector-ref>
    <retry-interval>1000</retry-interval>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
     <static-connectors>
      <connector-ref>cluster-connector</connector-ref>
     </static-connectors>
   </cluster-connection>
  </cluster-connections>


  <ha-policy>
    <replication>
     <master>
        <check-for-live-server>true</check-for-live-server>
     </master>
    </replication>
  </ha-policy>

My broker.xml for the slave node is:

  <connectors>
    <connector name="artemis">tcp://172.29.219.104:61616</connector>
    <connector name="cluster-connector">tcp://172.29.219.89:61616</connector>
  </connectors> 

  <cluster-user>cluster-user</cluster-user>
  <cluster-password>cluster-password</cluster-password>

  <cluster-connections>
   <cluster-connection name="cluster1">
    <address>*</address>
    <connector-ref>artemis</connector-ref>
    <retry-interval>1000</retry-interval>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
     <static-connectors>
      <connector-ref>cluster-connector</connector-ref>
     </static-connectors>
   </cluster-connection>
  </cluster-connections>


  <ha-policy>
    <replication>
     <slave>
         <allow-failback>true</allow-failback>
     </slave>
    </replication>
  </ha-policy>

The above configuration, when deployed on just the 2 VMs, works perfectly fine. As soon as I take the master down, the failover is instantaneous, and when I bring the master back, the failback is instantaneous too.

Now I want to dockerize this.

My Dockerfile is:

COPY initialize.sh /

RUN  chmod a+x initialize.sh

RUN yum clean all && yum install -y unzip java-1.8.0-openjdk.x86_64

RUN curl -f -L -o apache-artemis-2.4.0-bin.zip http://apache.mirrors.spacedump.net/activemq/activemq-artemis/2.4.0/apache-artemis-2.4.0-bin.zip

RUN unzip -qd /opt apache-artemis-2.4.0-bin.zip

EXPOSE 8080 61616 5672 61613 5445 1883 

ENTRYPOINT [ "/initialize.sh" ] 

The initialize.sh just sets up the brokers and loads the respective broker.xml files for the master and slave configs.

My Docker container for the master is deployed on the master node. I start the docker container with the command:

docker run -p 8080:8080 -p 61616:61616 -p 5672:5672 -p 61613:61613 -p 5445:5445 -p 1883:1883 <container-id> --state master

My Docker container for the slave is deployed on the slave node. I start the docker container with:

docker run -p 8080:8080 -p 61616:61616 -p 5672:5672 -p 61613:61613 -p 5445:5445 -p 1883:1883 <container-id> --state slave

The broker.xml configs I am loading into the containers are the same as the ones above.

But in this case, when I take down the master, the failover takes over a minute to happen.

The logs are:

14:32:43,532 INFO  [org.apache.activemq.artemis.core.server] AMQ221066: Initiating quorum vote: LiveFailoverQuorumVote
14:32:43,535 INFO  [org.apache.activemq.artemis.core.server] AMQ221067: Waiting 30 seconds for quorum vote results.
14:32:43,535 INFO  [org.apache.activemq.artemis.core.server] AMQ221068: Received all quorum votes.
14:32:43,536 INFO  [org.apache.activemq.artemis.core.server] AMQ221071: Failing over based on quorum vote results.
14:32:43,561 INFO  [org.apache.activemq.artemis.core.server] AMQ221037: ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001 to become 'live'
14:32:43,591 WARN  [org.apache.activemq.artemis.core.client] AMQ212004: Failed to connect to server.
14:32:43,854 INFO  [org.apache.activemq.artemis.core.server] AMQ221003: Deploying queue DLQ on address DLQ
14:32:43,855 INFO  [org.apache.activemq.artemis.core.server] AMQ221003: Deploying queue ExpiryQueue on address ExpiryQueue
14:32:44,261 INFO  [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
14:32:44,318 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61616 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
14:32:44,345 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5445 for protocols [HORNETQ,STOMP]
14:32:44,348 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5672 for protocols [AMQP]
14:32:44,365 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:1883 for protocols [MQTT]
14:32:44,368 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61613 for protocols [STOMP]

And the failback does not occur at all when the master is back up.

On the slave container, all I see in the logs is:

14:34:29,464 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@66d554c6 [name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, queue=QueueImpl[name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001], temp=false]@21ef670d targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@66d554c6 [name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, queue=QueueImpl[name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001], temp=false]@21ef670d targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-89], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1909325807[nodeUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-104, address=*, server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-89], discoveryGroupConfiguration=null]] is connected

Does anyone have any idea why master/slave replication isn't working when run in Docker?
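
For reference, the 30-second pause visible in the log matches the quorum-vote wait the broker announces (AMQ221067) before the backup decides to become live, and failback is opt-in in Artemis replication. Below is a minimal broker.xml sketch of the two settings involved, assuming a standard replication-based ha-policy (the element names are stock Artemis 2.x; the rest of the cluster configuration is left unchanged):

<!-- on the master: on restart, check whether a backup has taken over
     with this node's ID, so that failback can be negotiated -->
<ha-policy>
   <replication>
      <master>
         <check-for-live-server>true</check-for-live-server>
      </master>
   </replication>
</ha-policy>

<!-- on the slave: surrender 'live' status when the original master returns -->
<ha-policy>
   <replication>
      <slave>
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>

Without allow-failback on the slave, the behaviour described here (no failback at all) would be expected.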

XML merges no longer working as they should

When using the latest tag, the image fails to merge XML overrides properly, as it did in a previously working environment.

To replicate:
docker pull vromero/activemq-artemis

broker-00.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <security-enabled>false</security-enabled>
   </core>
</configuration>

docker run -it --rm -v /home/artemis-override/:/var/lib/artemis/etc-override vromero/activemq-artemis:2.4.0 cat ../etc/broker.xml

returns:

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>ASYNCIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>24000</journal-buffer-timeout>
        <journal-max-io>4096</journal-max-io>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>
        <acceptors>
            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
            <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
            <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
            <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
            <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"></permission>
                <permission type="deleteNonDurableQueue" roles="amq"></permission>
                <permission type="createDurableQueue" roles="amq"></permission>
                <permission type="deleteDurableQueue" roles="amq"></permission>
                <permission type="createAddress" roles="amq"></permission>
                <permission type="deleteAddress" roles="amq"></permission>
                <permission type="consume" roles="amq"></permission>
                <permission type="browse" roles="amq"></permission>
                <permission type="send" roles="amq"></permission>
                <permission type="manage" roles="amq"></permission>
            </security-setting>
        </security-settings>
        <address-settings>
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>
        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ"></queue>
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue"></queue>
                </anycast>
            </address>
        </addresses>
    </core>
    <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
        <security-enabled>false</security-enabled>
    </core>
</configuration>
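
Note that in this output the override's <core> element ends up as a second sibling under <configuration> instead of being merged into the existing <core>, so the <security-enabled>false</security-enabled> setting is presumably ignored by the broker. A quick diagnostic sketch, assuming xmlstarlet is available inside the image and the same working directory as the cat example above:

docker run -it --rm -v /home/artemis-override/:/var/lib/artemis/etc-override vromero/activemq-artemis:2.4.0 \
  xmlstarlet sel -N c="urn:activemq:core" -t -v "count(/*/c:core)" ../etc/broker.xml

A correctly merged broker.xml should print 1; the broken output above yields 2.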

Changing to a known-working image:
docker pull vromero/activemq-artemis@sha256:626afff517d3ec0564987b7bbce17f1f8d55f5b55c5cf282d2a6049c0c1074a8

Output:

<?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <!-- from 1.0.0 to 1.5.5 the following line should be : <core xmlns="urn:activemq:core"> -->
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name><persistence-enabled>true</persistence-enabled><!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       --><journal-type>ASYNCIO</journal-type><paging-directory>./data/paging</paging-directory><bindings-directory>./data/bindings</bindings-directory><journal-directory>./data/journal</journal-directory><large-messages-directory>./data/large-messages</large-messages-directory><journal-datasync>true</journal-datasync><journal-min-files>2</journal-min-files><journal-pool-files>-1</journal-pool-files><journal-file-size>10M</journal-file-size><!--
       This value was determined through a calculation.
       Your system could perform 5.56 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 180000 nanoseconds.

       Note: If you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and ournal-datasync=false.
      --><journal-buffer-timeout>180000</journal-buffer-timeout><!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       --><journal-max-io>4096</journal-max-io><!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        --><!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> --><!-- <network-check-period>10000</network-check-period> --><!-- <network-check-timeout>1000</network-check-timeout> --><!-- this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6

           Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running --><!-- <network-check-list>10.0.0.1</network-check-list> --><!-- use this to customize the ping used for ipv4 addresses --><!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> --><!-- use this to customize the ping used for ipv6 addresses --><!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> --><!-- how often we are looking for how many bytes are being used on the disk in ms --><disk-scan-period>5000</disk-scan-period><!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. --><max-disk-usage>90</max-disk-usage><!-- should the broker detect dead locks and other issues --><critical-analyzer>true</critical-analyzer><critical-analyzer-timeout>120000</critical-analyzer-timeout><critical-analyzer-check-period>60000</critical-analyzer-check-period><critical-analyzer-policy>HALT</critical-analyzer-policy><!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory

            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.

            <global-max-size>100Mb</global-max-size>

      --><acceptors>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpMinCredits=300</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors><security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings><address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings><addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ"/>
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue"/>
            </anycast>
         </address>

      </addresses><security-enabled>false</security-enabled>
   </core>
</configuration>

I feel like this was likely introduced with #50

Jolokia CORS Error

Hi, when I deploy your Docker image to a host machine, the following error occurs: "Operation unknown failed due to: java.lang.Exception : Origin http://45.32.145.53:8161 is not allowed to call this agent"
[screenshot: Jolokia error dialog in the Artemis web console]
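
If it helps, this error typically comes from Jolokia's CORS policy in etc/jolokia-access.xml, which by default only allows localhost origins. A minimal sketch of a relaxed policy, assuming the stock Artemis file layout (the extra <allow-origin> entry is a hypothetical addition for the host shown in the error; the file could be replaced by mounting a modified copy over etc/jolokia-access.xml in the container):

<?xml version="1.0" encoding="UTF-8"?>
<restrict>
   <cors>
      <!-- stock entry: only local access allowed -->
      <allow-origin>*://localhost*</allow-origin>
      <!-- hypothetical addition for the host in the error message -->
      <allow-origin>http://45.32.145.53*</allow-origin>
      <strict-checking/>
   </cors>
</restrict>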
