
Comments (11)

s3rius commented on May 18, 2024

I need to upload the vme_server image to Dockerhub, and then load this image in AWS through methods such as ECS or Fargate

Yes

Do I need to upload the actual docker-compose.yml to AWS?

No. It's only for local development.

Can I package this database through Amazon RDS?

How can you package through RDS? If you just want to use RDS, deploy an RDS instance, then configure your application to use it. RDS itself is PostgreSQL-compatible.
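If you do go the RDS route, the application change is pure configuration: point the template's database settings at the RDS endpoint instead of the compose `db` service. A hypothetical `.env` fragment (the endpoint, credentials, and the `VME_SERVER_` prefix are placeholders; use whatever prefix your generated project uses):

```
VME_SERVER_DB_HOST=mydb.abc123xyz.ap-southeast-1.rds.amazonaws.com
VME_SERVER_DB_PORT=5432
VME_SERVER_DB_USER=vme
VME_SERVER_DB_PASS=change-me
VME_SERVER_DB_BASE=vme
```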

from fastapi-template.

gpkc commented on May 18, 2024

Since you're using containers, ECS can be a good option if you drop the Postgres deployment and use RDS instead. I think Terraform would be a good way to set up both ECS and RDS, though it will still require some VPC setup, which might be outside the scope of this. Otherwise, the Kubernetes version should be compatible with EKS just fine.


vmjcv commented on May 18, 2024

@s3rius @gpkc
I used fastapi-template to create my FastAPI app, and I now want to deploy it to AWS.

What should I do to use ECS or Fargate?

Can you help me, or point me to any example projects? Thank you very much for your help.


s3rius avatar s3rius commented on May 18, 2024

Hi! This template comes with a Dockerfile, and you can easily build an image with your application inside. After the image is ready, you can deploy it.

  1. Build the image with docker compose -f deploy/docker-compose.yml build
  2. Get the image name from docker-compose.yaml.
  3. Create a custom tag for your image: docker tag "previous_name:version" "myregistry.aws/application-name:version"
  4. Upload the image to some container registry, like Dockerhub or your private one, using docker push "myregistry.aws/application-name:version"
  5. Deploy it.


vmjcv commented on May 18, 2024

Thanks for your help!
This is the example I am currently running.
(screenshots of the running docker compose services omitted)

According to your statement, is my understanding correct that I need to upload the vme_server image to Dockerhub, and then load this image in AWS through methods such as ECS or Fargate? Do I need to upload the actual docker-compose.yml to AWS?

Secondly, I am currently using Postgres. Can I package this database through Amazon RDS?


vmjcv commented on May 18, 2024

If I only upload the vme_server image and do not upload docker-compose.yml, how does AWS determine that I need to start the Redis, DB, and RMQ services?


gpkc commented on May 18, 2024

@vmjcv

If you're deploying to AWS ECS, there is no reason to use Dockerhub. Simply upload to AWS ECR (Elastic Container Registry). If you want to use Dockerhub, you would ideally have to make your images private, otherwise others will be able to access them, and then there are extra steps to make AWS able to access them. To upload to ECR you need to configure your AWS CLI to have access to ECR.

There is no special setting you need to apply to your FastAPI app for it to be usable on AWS ECS. If it runs the webserver with uvicorn, then it's pretty much good to go.
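For example, a container whose Dockerfile ends like this is already ECS-ready; the module path here is a hypothetical placeholder for wherever your generated project exposes its FastAPI app:

```
# Bind to 0.0.0.0 so the container port is reachable from the task's network interface.
# "my_project.web.application:app" is a placeholder module path.
CMD ["uvicorn", "my_project.web.application:app", "--host", "0.0.0.0", "--port", "8000"]
```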

Once you do that, then run these steps:

  • Create an ECS cluster and set it as Fargate.
  • Create a task definition and point it at the image you've uploaded to ECR. Also set up any environment variables your container should have access to, for example secrets. There are other extra settings you need here, such as ports. Set the task definition as Fargate too.
  • Create an ECS service with launch type Fargate in the compute options and select the task definition you created above. Here you also need to configure the network (subnets and such, which you set up as you create your VPC), and attach a load balancer to the app if you want a fixed URL/domain for it (the load balancer costs extra, but I think it's included in the free tier).


vmjcv commented on May 18, 2024

@gpkc

Thanks for your help, I will try your approach.


vmjcv commented on May 18, 2024

@gpkc @s3rius
Thank you very much for your help. I successfully deployed my FastAPI application on AWS following your plan.

But it has to be said that the steps are still quite cumbersome and complex. I will describe the steps I completed below, hoping they will help those in need in the future.

The following steps cover only AWS ECR, ECS, and Fargate.

  1. Configure the AWS environment

    • install the AWS CLI: http://aws.amazon.com/tools/
    • install the AWS Tools for PowerShell (ECR module):
      • Install-Module -Name AWS.Tools.Installer -Force
      • Install-AWSToolsModule AWS.Tools.EC2,AWS.Tools.S3 -CleanUp
      • Install-AWSToolsModule AWS.Tools.ECR
    • log in to ECR with PowerShell:
      • (Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin 044907426648.dkr.ecr.ap-southeast-1.amazonaws.com (note that you need to create an ECR repository in advance, then obtain your actual repository address from it)
  2. Package and upload images

  • docker build --no-cache -t am_platform . -f deploy/Dockerfile
  • docker tag am_platform:latest 044907426648.dkr.ecr.ap-southeast-1.amazonaws.com/am_platform:latest (Please use the actual address)
  • docker push 044907426648.dkr.ecr.ap-southeast-1.amazonaws.com/am_platform:latest (Please use the actual address)
  3. Modify the local Docker Compose file (because the ECS CLI only supports Compose file format 3.0) (I list the changes that may be needed)
    • Change version: '3.9' to version: '3'
    • delete the build section
    • delete the long-form depends_on (the variant with conditions)
    • delete healthcheck
    • delete the volume name field
    • delete hostname
    • change env_file: .env to ../.env
      The configuration before and after the final modification is
      Before modification:
version: '3.9'

services:
  api: &main_app
    build:
      context: .
      dockerfile: ./deploy/Dockerfile
      target: prod
    image: am_platform:${AM_PLATFORM_VERSION:-latest}
    restart: always
    env_file:
      - .env
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      rmq:
        condition: service_healthy
    environment:
      AM_PLATFORM_HOST: 0.0.0.0
      AM_PLATFORM_DB_HOST: am_platform-db
      AM_PLATFORM_DB_PORT: 5432
      AM_PLATFORM_DB_USER: am_platform
      AM_PLATFORM_DB_PASS: am_platform
      AM_PLATFORM_DB_BASE: am_platform
      AM_PLATFORM_RABBIT_HOST: am_platform-rmq
      AM_PLATFORM_REDIS_HOST: am_platform-redis

  db:
    image: postgres:13.8-bullseye
    hostname: am_platform-db
    environment:
      POSTGRES_PASSWORD: "am_platform"
      POSTGRES_USER: "am_platform"
      POSTGRES_DB: "am_platform"
    volumes:
      - am_platform-db-data:/var/lib/postgresql/data
    restart: always
    healthcheck:
      test: pg_isready -U am_platform
      interval: 2s
      timeout: 3s
      retries: 40

  migrator:
    image: am_platform:${AM_PLATFORM_VERSION:-latest}
    restart: "no"
    command: alembic upgrade head
    environment:
      AM_PLATFORM_DB_HOST: am_platform-db
      AM_PLATFORM_DB_PORT: 5432
      AM_PLATFORM_DB_USER: am_platform
      AM_PLATFORM_DB_PASS: am_platform
      AM_PLATFORM_DB_BASE: am_platform
    depends_on:
      db:
        condition: service_healthy

  redis:
    image: bitnami/redis:6.2.5
    hostname: "am_platform-redis"
    restart: always
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 50

  rmq:
    image: rabbitmq:3.9.16-alpine
    hostname: "am_platform-rmq"
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
      RABBITMQ_DEFAULT_VHOST: "/"
    healthcheck:
      test: rabbitmq-diagnostics check_running -q
      interval: 3s
      timeout: 3s
      retries: 50



volumes:
  am_platform-db-data:
    name: am_platform-db-data

After modification:

version: '3'

services:
  api:
    image: am_platform:${AM_PLATFORM_VERSION:-latest}
    restart: always
    env_file:
      - ../.env
    depends_on:
      - db
      - redis
      - rmq
    environment:
      AM_PLATFORM_HOST: 0.0.0.0
      AM_PLATFORM_DB_HOST: am_platform-db
      AM_PLATFORM_DB_PORT: 5432
      AM_PLATFORM_DB_USER: am_platform
      AM_PLATFORM_DB_PASS: am_platform
      AM_PLATFORM_DB_BASE: am_platform
      AM_PLATFORM_RABBIT_HOST: am_platform-rmq
      AM_PLATFORM_REDIS_HOST: am_platform-redis

  db:
    image: postgres:13.8-bullseye
    environment:
      POSTGRES_PASSWORD: "am_platform"
      POSTGRES_USER: "am_platform"
      POSTGRES_DB: "am_platform"
    volumes:
      - am_platform-db-data:/var/lib/postgresql/data
    restart: always

  migrator:
    image: am_platform:${AM_PLATFORM_VERSION:-latest}
    restart: "no"
    command: alembic upgrade head
    environment:
      AM_PLATFORM_DB_HOST: am_platform-db
      AM_PLATFORM_DB_PORT: 5432
      AM_PLATFORM_DB_USER: am_platform
      AM_PLATFORM_DB_PASS: am_platform
      AM_PLATFORM_DB_BASE: am_platform
    depends_on:
      - db

  redis:
    image: bitnami/redis:6.2.5
    restart: always
    environment:
      ALLOW_EMPTY_PASSWORD: "yes"

  rmq:
    image: rabbitmq:3.9.16-alpine
    restart: always
    environment:
      RABBITMQ_DEFAULT_USER: "guest"
      RABBITMQ_DEFAULT_PASS: "guest"
      RABBITMQ_DEFAULT_VHOST: "/"

volumes:
  am_platform-db-data:
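The manual edits above can also be sketched as a small script. This is only an illustration of the key-stripping steps, assuming the compose file has already been loaded into a plain dict (for example with PyYAML); it does not cover the env_file path change:

```python
# Strip the compose keys the ECS CLI cannot handle, mirroring the
# manual edits described above. Input and output are plain dicts.
def simplify_for_ecs(compose: dict) -> dict:
    out = {
        "version": "3",
        "services": {},
        # keep volume names but drop their "name" field
        "volumes": {name: {} for name in compose.get("volumes", {})},
    }
    for name, svc in compose["services"].items():
        svc = dict(svc)  # shallow copy so the input is not mutated
        for key in ("build", "healthcheck", "hostname"):
            svc.pop(key, None)
        # long-form depends_on (with conditions) -> short-form list of names
        if isinstance(svc.get("depends_on"), dict):
            svc["depends_on"] = sorted(svc["depends_on"])
        out["services"][name] = svc
    return out

example = {
    "version": "3.9",
    "services": {
        "api": {
            "build": {"context": "."},
            "image": "am_platform:latest",
            "depends_on": {"db": {"condition": "service_healthy"}},
        },
        "db": {
            "image": "postgres:13.8-bullseye",
            "hostname": "am_platform-db",
            "healthcheck": {"test": "pg_isready"},
        },
    },
    "volumes": {"am_platform-db-data": {"name": "am_platform-db-data"}},
}
print(simplify_for_ecs(example))
```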
  4. Create an AWS task using the Docker Compose file

    • ecs-cli compose -f deploy/docker-compose.yml create
  5. Modify some task definitions in the AWS console

    • Visit your AWS ECS console
    • change the task definition:
      • change the launch type to AWS Fargate
      • change the OS/architecture to match your image (mine was Linux/ARM64; you can find yours with "docker image inspect am_platform")
      • change the repository URL and add a port mapping
      • change AM_PLATFORM_DB_HOST, AM_PLATFORM_RABBIT_HOST, and AM_PLATFORM_REDIS_HOST to localhost
      • add health checks (obtain the health check for each image from the original Docker Compose file and add it)
      • add container dependencies (dependsOn)
      • modify the image URLs for db, redis, and rmq; you can search for them at https://gallery.ecr.aws/ , e.g. change postgres:13.8-bullseye to public.ecr.aws/docker/library/postgres:13.13-bullseye
      • note that if you use postgres as the database, the PGUSER environment variable needs to be added
      • add a storage volume for the database data
    • The completed task definition after modification:
   {
    "family": "am_platform",
    "containerDefinitions": [
        {
            "name": "api",
            "image": "044907426648.dkr.ecr.ap-southeast-1.amazonaws.com/am_platform:latest",
            "cpu": 0,
            "memory": 512,
            "portMappings": [
                {
                    "name": "api-8000-tcp",
                    "containerPort": 8000,
                    "hostPort": 8000,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "AM_PLATFORM_HOST",
                    "value": "0.0.0.0"
                },
                {
                    "name": "AM_PLATFORM_DB_USER",
                    "value": "am_platform"
                },
                {
                    "name": "AM_PLATFORM_RELOAD",
                    "value": "True"
                },
                {
                    "name": "AM_PLATFORM_DB_PASS",
                    "value": "am_platform"
                },
                {
                    "name": "AM_PLATFORM_DB_PORT",
                    "value": "5432"
                },
                {
                    "name": "AM_PLATFORM_REDIS_HOST",
                    "value": "localhost"
                },
                {
                    "name": "AM_PLATFORM_DB_HOST",
                    "value": "localhost"
                },
                {
                    "name": "AM_PLATFORM_RABBIT_HOST",
                    "value": "localhost"
                },
                {
                    "name": "AM_PLATFORM_DB_BASE",
                    "value": "am_platform"
                },
                {
                    "name": "USERS_SECRET",
                    "value": "\"\""
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "linuxParameters": {
                "capabilities": {}
            },
            "dependsOn": [
                {
                    "containerName": "db",
                    "condition": "HEALTHY"
                },
                {
                    "containerName": "redis",
                    "condition": "HEALTHY"
                }
            ],
            "privileged": false,
            "readonlyRootFilesystem": false,
            "pseudoTerminal": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/am_platform",
                    "awslogs-region": "ap-southeast-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            }
        },
        {
            "name": "db",
            "image": "public.ecr.aws/docker/library/postgres:13.13-bullseye",
            "cpu": 0,
            "memory": 512,
            "portMappings": [],
            "essential": true,
            "environment": [
                {
                    "name": "POSTGRES_USER",
                    "value": "am_platform"
                },
                {
                    "name": "POSTGRES_PASSWORD",
                    "value": "am_platform"
                },
                {
                    "name": "POSTGRES_DB",
                    "value": "am_platform"
                },
                {
                    "name": "PGUSER",
                    "value": "am_platform"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "am_platform-db-data",
                    "containerPath": "/var/lib/postgresql/data",
                    "readOnly": false
                }
            ],
            "volumesFrom": [],
            "linuxParameters": {
                "capabilities": {}
            },
            "privileged": false,
            "readonlyRootFilesystem": false,
            "pseudoTerminal": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/am_platform",
                    "awslogs-region": "ap-southeast-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            },
            "healthCheck": {
                "command": [
                    "CMD-SHELL",
                    "pg_isready -U am_platform"
                ],
                "interval": 5,
                "timeout": 3,
                "retries": 10
            }
        },
        {
            "name": "redis",
            "image": "public.ecr.aws/bitnami/redis:6.2.14",
            "cpu": 0,
            "memory": 512,
            "portMappings": [],
            "essential": true,
            "environment": [
                {
                    "name": "ALLOW_EMPTY_PASSWORD",
                    "value": "yes"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "linuxParameters": {
                "capabilities": {}
            },
            "privileged": false,
            "readonlyRootFilesystem": false,
            "pseudoTerminal": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/am_platform",
                    "awslogs-region": "ap-southeast-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            },
            "healthCheck": {
                "command": [
                    "CMD-SHELL",
                    "redis-cli ping"
                ],
                "interval": 5,
                "timeout": 3,
                "retries": 10
            }
        },
        {
            "name": "rmq",
            "image": "public.ecr.aws/docker/library/rabbitmq:3.9.29-alpine",
            "cpu": 0,
            "memory": 512,
            "portMappings": [],
            "essential": true,
            "environment": [
                {
                    "name": "RABBITMQ_DEFAULT_PASS",
                    "value": "guest"
                },
                {
                    "name": "RABBITMQ_DEFAULT_USER",
                    "value": "guest"
                },
                {
                    "name": "RABBITMQ_DEFAULT_VHOST",
                    "value": "/"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "linuxParameters": {
                "capabilities": {}
            },
            "privileged": false,
            "readonlyRootFilesystem": false,
            "pseudoTerminal": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/am_platform",
                    "awslogs-region": "ap-southeast-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            },
            "healthCheck": {
                "command": [
                    "CMD-SHELL",
                    "rabbitmq-diagnostics check_running -q"
                ],
                "interval": 5,
                "timeout": 3,
                "retries": 10
            }
        }
    ],
    "taskRoleArn": "arn:aws:iam::044907426648:role/ecsTaskExecutionRole",
    "executionRoleArn": "arn:aws:iam::044907426648:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "volumes": [
        {
            "name": "am_platform-db-data",
            "host": {}
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "512",
    "memory": "3072",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    }
}
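The "add health check" step above is mostly mechanical: ECS wants the compose test command wrapped in CMD-SHELL and the durations as integer seconds, and it bounds the values (as far as I know, interval 5-300s, timeout 2-60s, retries 1-10 per the AWS docs), which is why the task definition above uses interval 5 and retries 10 where the compose file had 2s and 40. A small sketch of the conversion (the clamping bounds are my assumption):

```python
# Convert a docker-compose healthcheck into the ECS task-definition
# "healthCheck" shape used in the JSON above. Durations like "2s" become
# integer seconds, clamped to the ranges ECS accepts (assumed: interval
# 5-300, timeout 2-60, retries 1-10).
def compose_to_ecs_healthcheck(hc: dict) -> dict:
    def seconds(value: str) -> int:
        return int(value.rstrip("s"))
    return {
        "command": ["CMD-SHELL", hc["test"]],
        "interval": min(max(seconds(hc["interval"]), 5), 300),
        "timeout": min(max(seconds(hc["timeout"]), 2), 60),
        "retries": min(max(hc["retries"], 1), 10),
    }

# The db service's compose healthcheck from earlier in this thread:
print(compose_to_ecs_healthcheck(
    {"test": "pg_isready -U am_platform", "interval": "2s", "timeout": "3s", "retries": 40}
))
```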
  6. Use this task definition to start a service. (Pay attention to the security group configuration; only with the appropriate security group rules can you access the app through the internet.)

PS: These operations mix the AWS CLI and the AWS web console. In theory, you can do all of the above in the AWS CLI, but I am not very proficient with it, so I mixed the two to reach my deployment goal.


gpkc commented on May 18, 2024

@vmjcv It seems you're trying to deploy databases to AWS ECS. That's not a good idea: AWS ECS is better suited for stateless services, and databases are obviously stateful. You will lose your data.

Also, you're putting all of that in a single task definition, which means that when you deploy a new version of your app, you will redeploy everything, and for some time the old and new versions might even run in parallel (a huge mess) because of the rollout strategy. In container orchestrators such as ECS and Kubernetes, the unit of work isn't a container. In Kubernetes it's a pod (so each pod should be a service); in ECS it's a task, so each task should be an independent service. You should have your app alone in a single task, which will also allow you to scale it if needed.

The mental model of docker compose doesn't translate directly to container orchestrators such as those, and the deployment strategies are different.

I also don't recommend having a "migrator" job in general, because:

1- database migrations aren't generally concurrency-safe. If your migrations are all very simple (ALTER TABLE and such), then it might not be an issue.
2- your migration might run after your app has already started, meaning your app will start up with errors.


vmjcv commented on May 18, 2024

@gpkc
Currently, the database is deployed directly to AWS ECS, but I will replace it with AWS RDS in the future.

My steps are just meant to serve as a basic method for those who run into this problem in the future: how to actually deploy a project generated by fastapi-template on AWS.

Also you're having all of that as a single task definition, which means that when you deploy a new version of your app,

Do I understand correctly that I need to deploy Redis, RMQ, and the DB as separate tasks?

