
docker-cronicle-docker's People

Contributors

allcontributors[bot], bluet, fossabot, lisez, ngosang, peterbuga, thebestmoshe, txgruppi, winstonspencer

docker-cronicle-docker's Issues

PID file prevents Cronicle from starting

Whenever Cronicle exits improperly, the PID file remains in the logs directory and prevents subsequent startups.

I noticed there has already been an issue about this, but I experience it on the latest version too.

On start, old PID files are not removed.

Could you help me solve the issue or circumvent it?
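A workaround sketch I'm using for now (assumes the logs directory is volume-mounted on the host; the path is illustrative, taken from the reference compose file elsewhere in this page):

# Clear the stale PID file from the host-mounted logs directory, then restart
rm -f /root/docker/cronicle_volume/logs/cronicled.pid
docker restart cronicle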

Running Cronicle setup before importing config?

I noticed in entrypoint.sh that the custom config is moved into place after setup has run. I am building a Cronicle server that uses S3 storage, and I was having issues until I moved the setup step after the custom config copy. Was this tested with a custom config? I was not able to achieve the desired result without this change to entrypoint.sh.
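For illustration, the reordering looks roughly like this (a sketch only; the real entrypoint.sh differs, and the setup command and marker path are assumptions based on paths mentioned elsewhere on this page):

# Sketch of the reordered entrypoint logic (not the verbatim script)
# 1) Put the custom config in place first...
if [ -f /opt/cronicle/data/config.json.import ]; then
    cp /opt/cronicle/data/config.json.import /opt/cronicle/conf/config.json
fi
# 2) ...then run setup once, so it sees the custom storage settings (e.g. S3)
if [ ! -f /opt/cronicle/data/.setup_done ]; then
    /opt/cronicle/bin/control.sh setup
    touch /opt/cronicle/data/.setup_done
fi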

Looking forward to your response! Thanks!

Forced reload of the custom configuration does not work

Issue report

Describe

I am trying to use a custom configuration, but I cannot get the config to change by following the instructions.

To Reproduce:

  1. Pull and run the container following the instructions in README.md
  2. Stop the container
  3. Create a custom config file with url and http_port modified
  4. Copy the config to ...docker-cronicle-docker/data/config.json.import
  5. Remove the .setup_done file
  6. Restart the container

Result:

Cronicle's port did not change, and the log says:

Storage has already been set up. There is no need to run this command again.

Workarounds:

Build an image with a custom Dockerfile and config.json.import :P (see the sketch below)
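i.e. something like this (a sketch, not a tested build):

# Hypothetical workaround image: bake the custom config in at build time
FROM bluet/cronicle-docker:latest
COPY ./config.json.import /opt/cronicle/data/config.json.import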

Remove /opt/cronicle/logs/cronicled.pid on start

If the container is killed or the process does not exit in time, Cronicle won't boot on the next start.

[1633546671.242][2021-10-06 20:57:51][local-docker][13][Cronicle][debug][1][FATAL ERROR: Process 15 from logs/cronicled.pid is still alive and running.  Aborting startup.][],
[1633546671.241][2021-10-06 20:57:51][local-docker][13][Cronicle][debug][1][WARNING: An old PID File was found: logs/cronicled.pid: 15][]

Could you remove the file /opt/cronicle/logs/cronicled.pid on start or at least make it configurable with an environment variable?
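A minimal sketch of what I have in mind for the entrypoint (the environment variable name is hypothetical):

# Hypothetical entrypoint addition: drop a stale PID file before boot,
# unless the (assumed) CRONICLE_CLEAN_PID variable disables it
if [ "${CRONICLE_CLEAN_PID:-1}" = "1" ]; then
    rm -f /opt/cronicle/logs/cronicled.pid
fi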

add this to an already existing container

I'm using Docker and I don't see the correct way to implement this programmatically. I would rather not add a separate container, because I don't know how to connect it to the container where I already have things running.

Is there any way to add this to an already existing container with the following configuration?

docker-compose.yml:

version: "3.9"

services:

  reverse-proxy:
    env_file:
      - .env
    container_name: Proxy-Server
    image: nginxproxy/nginx-proxy
    restart: always
    depends_on:
      - webserver
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./config/ssl:/etc/nginx/certs
    extra_hosts:
      - "lh-stack.dock:127.0.0.1"
      - "pma.lh-stack.dock:127.0.0.1"
    ports:
      - 80:80
      - 443:443
    networks:
      - lamp-network
    environment:
      - TRUST_DOWNSTREAM_PROXY=true
      - ENABLE_WEBSOCKETS=true
    privileged: true
    tty: true

  webserver:
    env_file:
      - .env
    container_name: LH-STACK-Web-Server
    build:
      context: ./bin/php81
      dockerfile: Dockerfile.secure
      args:
        VIRTUAL_HOST: lh-stack.dock
    restart: always
    expose:
      - 80
    networks:
      - lamp-network
    volumes:
      - ./../project:/var/www/html:rw
      - ./../project/public:/var/www/html/public:rw
      - ./config/vhost:/etc/apache2/sites-enabled
      - ./config/php/php.ini:/usr/local/etc/php/php.ini
      - ./config/cron:/etc/cron-task
      - ./log/apache2:/var/log/apache2
      - ./log/cron:/var/log/cron
    environment:
      VIRTUAL_HOST: lh-stack.dock
      LH_WEB_MASTER: [email protected]
      LH_APACHE_DOCUMENT_ROOT: /var/www/html
      LH_DOCUMENT_ROOT: /public
    extra_hosts:
      - "host.docker.internal:host-gateway"
    labels:
      - "lh2.setup.description=Web Server"
      - "lh2.setup.role=webserver"
    privileged: true
    tty: true

Dockerfile:

FROM php:8.1-apache-bullseye

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update &&  \
    apt-get upgrade -y --no-install-recommends --fix-missing

RUN apt-get install -y --no-install-recommends --fix-missing tzdata sed build-essential dialog nano apt-utils cron wget git curl zip openssl gettext-base libnss3-tools

RUN apt-get -y autoremove && \
    apt-get clean

RUN a2enmod rewrite 
RUN a2enmod ssl 
RUN a2enmod headers 
RUN a2enmod proxy_wstunnel

RUN service apache2 restart

RUN mkdir -p /var/log/cron && \
    chmod 755 /var/log/cron && \
    touch /var/log/cron/cron.log

CMD cat /etc/cron-task/new-task >> /etc/cron.d/cron-task && \
    chmod 0644 /etc/cron.d/cron-task && \
    cron && \
    /usr/local/bin/apache2-foreground && \
    tail -f /var/log/cron/cron.log

I would like the implementation to be more friendly and viable through environment variables like:

    environment:
      VIRTUAL_HOST: lh-stack.dock
      BASE_APP_URL: lh-stack.dock/cronicle # instead of port 3012, serve under the same host, like phpMyAdmin

Create Event from Dockerfile

Hi everyone, I'd like to know if it is possible to write a file for an event and copy it in during the Docker build.

In other words, I have the following Dockerfile:

FROM bluet/cronicle-docker
COPY ./plugins /opt/cronicle/plugins
EXPOSE 3012

Is it possible to create an event and load it into my image, like my plugins folder? Something like the sketch below is what I have in mind.
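(Purely hypothetical; the backup file and an import-on-first-start step are assumptions, not existing features of this image:)

FROM bluet/cronicle-docker
COPY ./plugins /opt/cronicle/plugins
# Hypothetical: ship a pre-exported schedule and import it on first start,
# e.g. via /opt/cronicle/bin/storage-cli.js import (run before Cronicle boots)
COPY ./events-backup.txt /opt/cronicle/data/events-backup.txt
EXPOSE 3012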

Thanks

`.setup_done` file does not get saved to alternative storage locations

I set up Cronicle using this Docker image. However, I am using S3 as the storage backend, so I am not mounting any data volumes to the container.

The .setup_done file is created only inside the Docker container. Whenever the container restarts, the setup script runs again and resets all the data stored in S3.
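My current mitigation is mounting a small volume just so the marker survives restarts (a sketch; assumes the marker lives under /opt/cronicle/data as described elsewhere on this page):

services:
  cronicle:
    image: bluet/cronicle-docker:latest
    volumes:
      # Persist only local state such as .setup_done; job data stays in S3
      - ./cronicle-state:/opt/cronicle/data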

Vulnerabilities in Docker image bluet/cronicle-docker:0.8.62

Image: bluet/cronicle-docker:0.8.62
Most of them will be fixed if you update the base Docker image and Cronicle
https://github.com/anchore/grype

grype bluet/cronicle-docker:0.8.62
 ✔ Vulnerability DB        [no update available]
 ✔ Loaded image            
 ✔ Parsed image            
 ✔ Cataloged packages      [486 packages]
 ✔ Scanned image           [74 vulnerabilities]
NAME                    INSTALLED   FIXED-IN    VULNERABILITY        SEVERITY 
@npmcli/arborist        2.6.2       2.8.2       GHSA-2h3h-q99f-3fhc  Medium    
@npmcli/arborist        2.6.2       2.8.2       GHSA-gmw6-94gg-2rc2  Medium    
ansi-regex              3.0.0       5.0.1       GHSA-93q8-gq69-wqmw  Medium    
ansi-regex              3.0.0                   CVE-2021-3807        High      
ansi-regex              5.0.0       5.0.1       GHSA-93q8-gq69-wqmw  Medium    
ansi-regex              5.0.0                   CVE-2021-3807        High      
busybox                 1.33.1-r3   1.33.1-r4   CVE-2021-42374       Medium    
busybox                 1.33.1-r3   1.33.1-r5   CVE-2021-42375       Medium    
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42378       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42379       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42380       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42381       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42382       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42383       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42384       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42385       High      
busybox                 1.33.1-r3   1.33.1-r6   CVE-2021-42386       High      
cookie                  0.3.1                   CVE-2017-18589       High      
debug                   2.3.3       2.6.9       GHSA-gxpj-cx7g-858c  Low       
debug                   2.3.3                   CVE-2017-16137       Medium    
debug                   2.2.0       2.6.9       GHSA-gxpj-cx7g-858c  Low       
debug                   2.2.0                   CVE-2017-16137       Medium    
engine.io               1.8.3                   CVE-2020-36048       High      
jquery                  3.5.0                   CVE-2007-2379        Medium    
json-schema             0.2.3       0.4.0       GHSA-896r-f27r-55mw  Medium    
json-schema             0.2.3                   CVE-2021-3918        Critical  
lodash                  4.17.21                 GHSA-8p5q-j9m2-g8wr  Low       
minimist                0.0.8       1.2.2       GHSA-7fhm-mqm4-2wp7  Medium    
minimist                0.0.8       0.2.1       GHSA-vh95-rmgr-6w4m  Medium    
minimist                0.0.8                   CVE-2020-7598        Medium    
nodejs                  14.17.6-r0  14.18.1-r0  CVE-2021-22960       Medium    
nodejs                  14.17.6-r0  14.18.1-r0  CVE-2021-22959       Medium    
nodemailer              6.4.16      6.6.1       GHSA-hwqf-gcqm-7353  Medium    
nodemailer              6.4.16                  CVE-2021-23400       High      
npm                     7.17.0                  CVE-2021-43616       Critical  
openssh-client-common   8.6_p1-r2   8.6_p1-r3   CVE-2021-41617       High      
openssh-client-default  8.6_p1-r2   8.6_p1-r3   CVE-2021-41617       High      
openssh-keygen          8.6_p1-r2   8.6_p1-r3   CVE-2021-41617       High      
parsejson               0.0.3                   GHSA-q75g-2496-mxpp  High      
parsejson               0.0.3                   CVE-2017-16113       High      
shell-quote             1.6.1                   CVE-2021-42740       Critical  
socket.io               1.7.3       2.4.0       GHSA-fxwf-4rqh-v8g3  Medium    
socket.io               1.7.3                   CVE-2020-28481       Medium    
socket.io-parser        2.3.1       3.3.2       GHSA-xfhh-g9f5-x4m4  High      
socket.io-parser        2.3.1                   CVE-2020-36049       High      
ssl_client              1.33.1-r3   1.33.1-r4   CVE-2021-42374       Medium    
ssl_client              1.33.1-r3   1.33.1-r5   CVE-2021-42375       Medium    
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42378       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42379       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42380       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42381       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42382       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42383       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42384       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42385       High      
ssl_client              1.33.1-r3   1.33.1-r6   CVE-2021-42386       High      
tar                     6.1.0       6.1.2       GHSA-r628-mhmh-qjhw  High      
tar                     6.1.0       6.1.1       GHSA-3jfq-g458-7qm9  High      
tar                     6.1.0       6.1.9       GHSA-5955-9wpr-37jh  High      
tar                     6.1.0       6.1.9       GHSA-qq89-hq3f-393p  High      
tar                     6.1.0       6.1.7       GHSA-9r2w-394v-53qc  High      
tar                     6.1.0                   CVE-2021-32803       High      
tar                     6.1.0                   CVE-2021-32804       High      
tar                     6.1.0                   CVE-2021-37701       High      
tar                     6.1.0                   CVE-2021-37712       High      
tar                     6.1.0                   CVE-2021-37713       High      
underscore              1.4.4       1.12.1      GHSA-cf4h-3jhx-xvhq  High      
underscore              1.4.4                   CVE-2021-23358       High      
ws                      1.1.2       1.1.5       GHSA-5v72-xg48-5rpm  High      
xmlhttprequest-ssl      1.5.3       1.6.2       GHSA-h4j5-c7cj-74xg  High      
xmlhttprequest-ssl      1.5.3       1.6.1       GHSA-72mh-269x-7mh5  Critical  
xmlhttprequest-ssl      1.5.3                   CVE-2021-31597       Critical  

unable to import cronicle data backup

Per this option on the main repository, you can import a data backup to a new master. I tried running the import from inside the container and got this error:

/opt/cronicle # /opt/cronicle/bin/storage-cli.js import /opt/cronicle/data/cronicle-data-backup.txt
ERROR: Please stop Cronicle before running this script.

If I stop Cronicle by running /opt/cronicle/bin/control.sh stop inside the container, it kills the container and the import never completes.
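The closest I got to a workaround was running the import in a one-off container before starting Cronicle (a sketch; assumes the data directory holding the backup is volume-mounted, with an illustrative host path):

# Run the import with Cronicle stopped, by overriding the entrypoint
docker run --rm \
  -v /root/docker/cronicle_volume/data:/opt/cronicle/data \
  --entrypoint /opt/cronicle/bin/storage-cli.js \
  bluet/cronicle-docker:latest \
  import /opt/cronicle/data/cronicle-data-backup.txt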

Tag version in Docker images

Thank you for your Docker image. It's really useful for me.
Could you add tags with the Cronicle version?
I'm concerned that if I reinstall the server, the "latest" label will point to a new version and things will stop working.
Other projects add tags like 0.8.61-r0: Cronicle version + revision (in case you need to update the image for some reason).

Can this container be used for a worker node?

New to Cronicle and excited about the possibilities. I was trying to figure out whether your container can be used for a second node. My main use case is using Cronicle to manage the crontab of another machine where the jobs must run locally due to file system access.
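Something like this is what I imagine (the CRONICLE_secret_key naming just follows the CRONICLE_base_app_url pattern seen elsewhere on this page; it is an assumption, not a documented option):

# Hypothetical worker launch: same image, same secret_key as the master,
# since Cronicle servers cluster on a shared secret
docker run -d --hostname worker-01 \
  -e CRONICLE_secret_key='same-secret-as-master' \
  bluet/cronicle-docker:latest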

HTTP connection has closed: c540 - caprover

[Screenshot: 2023-12-14 11-22-23]

I'm installing it on CapRover; even with websockets active, the login prompt never appears, it just stays on the "Waiting for master server..." page. Locally on my notebook it worked perfectly.

2023-12-14T14:21:20.332444331Z [1702563680.332][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][9][Sending compressed HTTP response with gzip: 200 OK][{"Content-Type":"text/javascript","Access-Control-Allow-Origin":"*","Server":"Cronicle 1.0","Content-Length":90,"Content-Encoding":"gzip"}]
2023-12-14T14:21:20.333433784Z [1702563680.333][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][9][Request complete][]
2023-12-14T14:21:20.333700930Z [1702563680.333][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][9][Response finished writing to socket][{"id":"r541"}]
2023-12-14T14:21:20.333853923Z [1702563680.333][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][9][Request performance metrics:][{"scale":1000,"perf":{"total":9.003,"queue":0.145,"read":0.029,"filter":4.109,"process":0.512,"encode":0.845,"write":1.835},"counters":{"bytes_in":650,"bytes_out":236,"num_requests":1}}]
2023-12-14T14:21:20.333973305Z [1702563680.333][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][9][Keeping socket open for keep-alives: c540][]
2023-12-14T14:21:20.334753974Z [1702563680.334][2023-12-14 11:21:20][cc4b1acbcd36][13][WebServer][debug][8][HTTP connection has closed: c540][{"ip":"::ffff:10.0.1.8","port":3012,"total_elapsed":12,"num_requests":1,"bytes_in":650,"bytes_out":236}]

Do I need to set a specific env var for it to work in production?

Cannot set the hostname under config.json using environment variables.

When I run the following command:

# docker run --name cronicle --hostname cronicle -p 3012:3012 -p 443:443 -e CRONICLE_base_app_url='http://cronicle.test.com:3012' bluet/cronicle-docker:latest

I am able to access Cronicle using the IP address.

However, I noticed that the base_app_url setting in config.json still reads localhost:3012. Am I passing the variable correctly?
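For reference, this is how I checked the live value inside the container (assuming the default config path):

docker exec cronicle sh -c "grep base_app_url /opt/cronicle/conf/config.json"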

Sending the docker run logs in the following message...

Thanks

Document hostname option in docker

In Docker the hostname is random unless you set it. After restarting Cronicle I experienced this issue => jhuckaby/Cronicle#36

You should update the "docker run" command to include -h local-docker (sketch below). It may be worth including a reference docker-compose too.
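e.g.:

docker run -d --hostname local-docker -p 3012:3012 bluet/cronicle-docker:latest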

My docker-compose.yml

version: '3.8'
services:
  cronicle:
    image: bluet/cronicle-docker:latest
    container_name: cronicle
    hostname: local-docker
    environment:
      - TZ=Europe/Madrid
      # - CRONICLE_base_app_url=http://cronicle.home/
    ports:
      - "3012:3012"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/docker/cronicle_volume/data:/opt/cronicle/data
      - /root/docker/cronicle_volume/logs:/opt/cronicle/logs
      - /root/docker/cronicle_volume/plugins:/opt/cronicle/plugins
      - /root/docker/cronicle_volume/app:/app
    restart: unless-stopped

Flickering issue while setting up multiserver

Hi @bluet, I set up two Cronicle Docker containers (for a multiserver configuration). One runs on port 3012 and the other on 3013.
Everything works fine, except there is flickering in one of the Cronicle setups (mine happens on port 3013).

Can somebody please help here?

python script

I can't run Python scripts.
Can you help me?
It gives me "access denied".
This is my script:

#! / bin python2.7

print ('try')
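For reference, the shebang above looks malformed; a conventional version would be the following (assuming python2.7 is on the PATH inside the container). "Access denied" is also commonly caused by a missing executable bit, fixed with chmod +x script.py:

#!/usr/bin/env python2.7
# Same script with a valid shebang; remember to chmod +x the file
print('try')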

Slow boot

Hello, every time the container boots, it is really slow to access via the browser the first time.

It stays on "Waiting for master server..." for a long time, 1-3 minutes.

Is this a known issue? Is there something I can do?

Thank you

Publish Docker images for ARM architecture

You can publish images for other architectures. For example, the Raspberry Pi has an ARM processor.

Run:

docker buildx create --use
docker buildx build -t bluet/cronicle-docker:0.8.62-1 --platform linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/mips64le,linux/ppc64le,linux/s390x --push .

You will see something like this on Docker Hub:

[Screenshot: multi-architecture tags listed on Docker Hub]
