
Comments (7)

D10S0VSkY-OSS commented on July 20, 2024

@dehengxu
Okay, I understand. I will release a hotfix as soon as possible.

from stack-lifecycle-deployment.

dehengxu commented on July 20, 2024

Both modules, src.shared.helpers.get_data and src.worker.tasks.terraform_worker, use Redis as the result backend:

r = redis.Redis(
    host=settings.BACKEND_SERVER,
    port=6379,
    db=settings.BACKEND_DB,
    charset="utf-8",
    decode_responses=True,
)

This always creates a Redis client; even when I change the BACKEND config to MySQL, the connection fails.

Do you mean this should be BROKER_SERVER and BROKER_DB?


D10S0VSkY-OSS commented on July 20, 2024

Hi @dehengxu
It has been fixed in version v2.26.1.
Now the cache configuration is managed through these variables:

CACHE_USER: str = os.getenv("SLD_CACHE_USER", "")
CACHE_PASSWD: str = os.getenv("SLD_CACHE_PASSWD", "")
CACHE_SERVER: str = os.getenv("SLD_CACHE_SERVER", "redis")
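
For illustration, these variables might be consumed roughly like this (a minimal sketch; the actual client construction in v2.26.1 may differ, and the URL format here is an assumption):

```python
import os

# Cache settings as introduced in v2.26.1
CACHE_USER = os.getenv("SLD_CACHE_USER", "")
CACHE_PASSWD = os.getenv("SLD_CACHE_PASSWD", "")
CACHE_SERVER = os.getenv("SLD_CACHE_SERVER", "redis")

def cache_url() -> str:
    # Hypothetical helper: assemble a redis:// URL from the settings above,
    # including credentials only when a password is set
    auth = f"{CACHE_USER}:{CACHE_PASSWD}@" if CACHE_PASSWD else ""
    return f"redis://{auth}{CACHE_SERVER}:6379/0"

print(cache_url())  # with the defaults: redis://redis:6379/0
```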

Thank you very much for your collaboration, and I hope you can continue contributing.
Regards


D10S0VSkY-OSS commented on July 20, 2024

Hi @dehengxu
thanks a lot for your feedback!
The result backend has been changed to use the default database; this adds new tables to it, such as celery_taskmeta. If you work with Terraform stacks that contain many resources, it's important to increase the size of the blob in the result column.

mysql> desc celery_taskmeta;
+-----------+--------------+------+-----+---------+----------------+
| Field     | Type         | Null | Key | Default | Extra          |
+-----------+--------------+------+-----+---------+----------------+
| id        | int          | NO   | PRI | NULL    | auto_increment |
| task_id   | varchar(155) | YES  | UNI | NULL    |                |
| status    | varchar(50)  | YES  |     | NULL    |                |
| result    | blob         | YES  |     | NULL    |                |
| date_done | datetime     | YES  |     | NULL    |                |
| traceback | text         | YES  |     | NULL    |                |
| name      | varchar(155) | YES  |     | NULL    |                |
| args      | blob         | YES  |     | NULL    |                |
| kwargs    | blob         | YES  |     | NULL    |                |
| worker    | varchar(155) | YES  |     | NULL    |                |
| retries   | int          | YES  |     | NULL    |                |
| queue     | varchar(155) | YES  |     | NULL    |                |
+-----------+--------------+------+-----+---------+----------------+
12 rows in set (0.00 sec)

Commands to change the size (choose one):

ALTER TABLE celery_taskmeta MODIFY result MEDIUMBLOB;
ALTER TABLE celery_taskmeta MODIFY result LONGBLOB;

MEDIUMBLOB: Can store up to 16MB of data.
LONGBLOB: Can store up to 4GB of data.
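
As a quick sanity check on those limits, the standard MySQL capacities behind the three types work out as follows (general MySQL facts, not specific to this project):

```python
# MySQL BLOB column capacities in bytes:
#   BLOB       -> 2**16 - 1  (~64 KB, the default type of celery_taskmeta.result)
#   MEDIUMBLOB -> 2**24 - 1  (~16 MB)
#   LONGBLOB   -> 2**32 - 1  (~4 GB)
BLOB_MAX = 2**16 - 1
MEDIUMBLOB_MAX = 2**24 - 1
LONGBLOB_MAX = 2**32 - 1

print(BLOB_MAX, MEDIUMBLOB_MAX, LONGBLOB_MAX)
# 65535 16777215 4294967295
```

So a single large Terraform state can easily overflow the 64 KB default, which is why the ALTER is needed.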


dehengxu commented on July 20, 2024

Thanks, I'll try it.


dehengxu commented on July 20, 2024

I saw your update v2.25.0: the Celery worker uses MySQL as the backend, but the api-backend's get_data.py still uses Redis as the backend. Should the api-backend read from Redis?

I configured the backend to db+mysql for the api-backend, but the connection was refused, because get_data.py only reads task status from Redis.


D10S0VSkY-OSS commented on July 20, 2024

Hi @dehengxu
Both the workers and the API backend use MySQL as the result backend; changing the configuration affects both. However, Redis is also used as a broker, a cache, and a "semaphore" to prevent race conditions. The semaphore case is what you see in ./src/shared/helpers/get_data.py. Task behavior is altered through Celery in sld-api-backend/config/celery_config.py, which is designed to be modified by passing environment variables:

BROKER_USER = os.getenv("BROKER_USER", "")
BROKER_PASSWD = os.getenv("BROKER_PASSWD", "")
BROKER_SERVER = os.getenv("BROKER_SERVER", "redis")  # use rabbit or redis
BROKER_SERVER_PORT = os.getenv(
    "BROKER_SERVER_PORT", "6379"
)  # use port 6379 for redis or 5672 for RabbitMQ
BROKER_TYPE = os.getenv("BROKER_TYPE", "redis")  # use amqp for RabbitMQ or redis
# Result backend config
BACKEND_TYPE = os.getenv("BACKEND_TYPE", "db+mysql")
BACKEND_USER = os.getenv("BACKEND_USER", "root")
BACKEND_PASSWD = os.getenv("BACKEND_PASSWD", "123")
BACKEND_SERVER = os.getenv("BACKEND_SERVER", "db")
BACKEND_DB = os.getenv("BACKEND_DB", "restapi")
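
With those settings, the Celery result-backend URL would be assembled roughly as follows (a sketch based on the defaults above; the exact assembly in celery_config.py may differ):

```python
import os

# Defaults mirroring the celery_config.py excerpt above
BACKEND_TYPE = os.getenv("BACKEND_TYPE", "db+mysql")
BACKEND_USER = os.getenv("BACKEND_USER", "root")
BACKEND_PASSWD = os.getenv("BACKEND_PASSWD", "123")
BACKEND_SERVER = os.getenv("BACKEND_SERVER", "db")
BACKEND_DB = os.getenv("BACKEND_DB", "restapi")

# SQLAlchemy-style URL of the form Celery accepts for a database result backend
result_backend = (
    f"{BACKEND_TYPE}://{BACKEND_USER}:{BACKEND_PASSWD}"
    f"@{BACKEND_SERVER}/{BACKEND_DB}"
)
print(result_backend)  # db+mysql://root:123@db/restapi
```

Overriding BACKEND_SERVER and the credentials via environment variables is then enough to point both the workers and the API backend at your own MySQL instance.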

Regards

