```
Microservices
│
├── UserService/
├── TokenService/
├── InventoryService/
└── OrderService/
```
- UserService: handles user registration and login; responsible for authentication and authorization.
- TokenService: manages each user's token balance, which is used to pay for purchases.
- InventoryService: manages the books and their inventory.
- OrderService: manages orders and order processing.
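As a rough sketch of how the services cooperate on an order: OrderService publishes an OrderCreatedKafkaEvent, TokenService deducts payment, and InventoryService packs the books. The snippet below simulates that flow with plain function calls (the real services communicate through Kafka; the event/field names besides OrderCreatedKafkaEvent are illustrative):

```python
# Simulated happy path for one order. In the real system each "consumer"
# below is a separate service subscribed to the OrderCreatedKafkaEvent topic.

order = {"orderId": "o-1", "userId": "u-1", "total": 20,
         "isPaymentProcessed": False, "isInventoryProcessed": False}
balances = {"u-1": 1500}

def token_service_on_order_created(evt):
    # TokenService: deduct the order total from the user's balance.
    balances[evt["userId"]] -= evt["total"]
    evt["isPaymentProcessed"] = True

def inventory_service_on_order_created(evt):
    # InventoryService: pack the books (stock deduction not implemented yet).
    evt["isInventoryProcessed"] = True

# "Publishing" the event to both consumers:
for consumer in (token_service_on_order_created, inventory_service_on_order_created):
    consumer(order)

print(balances["u-1"])  # 1480
print(order["isPaymentProcessed"] and order["isInventoryProcessed"])  # True
```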
- .NET 8.0: the services are built with .NET 8.0.
- Docker: the services are containerized with Docker.
- Docker Compose: the services are orchestrated with Docker Compose.
- PostgreSQL: the database.
- Redis: the cache.
- Kafka: the message broker.
- Swagger: API documentation.
- xUnit: the test framework.
-
The projects are built on a Debian-based Linux distribution, so it is best to run them in a Linux environment.

```shell
git clone https://github.com/mojbaba/bookstore.git
cd bookstore
```
There are integration tests that exercise each service in isolation but against the real database, Redis, and Kafka. They cover each service's use cases and the flow between the services.

- The integration tests use Testcontainers to run the real database, Redis, and Kafka in Docker containers, so you need Docker and Docker Compose installed.
- Ideally there would be extensive unit tests for each service, but I didn't have enough time to implement them.
```shell
dotnet test
```
```shell
git clone https://github.com/mojbaba/bookstore.git
cd bookstore
docker compose --file docker-compose.infra.yml up -d
```

If you get a network error, create the network manually:

```shell
docker network create bookstore
```

Then run the migrations and start the APIs:

```shell
docker compose --file docker-compose.migrations.yml up -d
docker compose --file docker-compose.apis.yml up -d
```
- User Service: http://localhost:8081
- Inventory Service: http://localhost:808
- Token Service: http://localhost:808
- Order Service: http://localhost:808
A single entry point for all services (Nginx, Ocelot, etc.) would have been better, but I didn't have enough time to implement it.
Register a user on Swagger UI (UserService -> register):

```json
{
  "email": "[email protected]",
  "password": "string"
}
```
Log in on Swagger UI (UserService -> login):

```json
{
  "email": "[email protected]",
  "password": "string"
}
```
and get a token back:

```json
{
  "email": "[email protected]",
  "token": "{JWT_TOKEN}"
}
```
Create a book on Swagger UI (InventoryService -> create):

```json
{
  "title": "Book store microservice architecture",
  "author": "Mojbaba",
  "price": 20,
  "amount": 1
}
```
and get the book id back:

```json
{
  "bookId": "{GUID}"
}
```
On Swagger UI, use the Authorize button and add the token {JWT_TOKEN} obtained in step 8.
Add balance on Swagger UI (TokenService -> add):

```json
{
  "amount": 1500
}
```
On Swagger UI, use the Authorize button and add the token {JWT_TOKEN} obtained in step 8.
Create an order on Swagger UI (OrderService -> create):

```json
{
  "bookIds": [
    "{the book id got from step 9}"
  ]
}
```
On Swagger UI (OrderService -> Admin Orders) you can track the order statuses. `"isPaymentProcessed": true` together with `"isInventoryProcessed": true` means the order was processed successfully. `"isPaymentProcessed": true` on its own means the TokenService received the OrderCreatedKafkaEvent and deducted the amount from the user's account:
```json
[
  {
    "id": "string",
    "userId": "string",
    "status": 0,
    "createdAt": "2024-02-24T03:44:56.287Z",
    "isPaymentProcessed": true,
    "isInventoryProcessed": true,
    "failReason": "string"
  }
]
```
Check the balance on Swagger UI (TokenService -> balance) (you must authorize with the token obtained in step 8):

```json
{
  "userId": "{GUID}",
  "balance": {NEW_BALANCE}
}
```

The user's balance is reduced by the order total once the order has been processed.
The InventoryService is not completely implemented, so stock is not actually deducted when an order is processed; it simply accepts the order and publishes a successful "books packed" event.
- In the real world, the services should be deployed to a Kubernetes cluster.
- Each service repository should be separate and have its own CI/CD pipeline.
- The projects named BookStore.* are libraries containing the code shared between the services. They should be split out, have their own CI/CD pipelines, and be published to a private NuGet repository. (I don't like catch-all `Common` or `Shared` libraries, because they are not maintainable and are not designed to be used in other projects; they are just a pile of unrelated code.)
- Authorization is implemented with JWT, so when a user logs out, the token is blacklisted and can no longer be used. The other services are notified through a Kafka event.
- Blacklisted tokens are stored in the Redis cache with an expiration time.
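A minimal sketch of the blacklist idea, using an in-memory dict with expiry timestamps to stand in for Redis key TTLs (the class and method names are illustrative, not from the codebase):

```python
import time

class TokenBlacklist:
    """In-memory stand-in for a Redis-backed JWT blacklist.

    Each entry expires after `ttl` seconds, mirroring a Redis key TTL:
    the blacklist entry only needs to outlive the token's own expiry.
    """

    def __init__(self) -> None:
        self._expiry: dict[str, float] = {}

    def blacklist(self, token: str, ttl: float) -> None:
        # With real Redis this would be a SET with an EX (expire) option.
        self._expiry[token] = time.monotonic() + ttl

    def is_blacklisted(self, token: str) -> bool:
        deadline = self._expiry.get(token)
        if deadline is None:
            return False
        if time.monotonic() >= deadline:
            del self._expiry[token]  # lazily expire, as Redis would
            return False
        return True

bl = TokenBlacklist()
bl.blacklist("some.jwt.token", ttl=3600)
print(bl.is_blacklisted("some.jwt.token"))  # True
print(bl.is_blacklisted("other.token"))     # False
```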
- The services are not completely implemented, so there are some missing parts and bugs.
- The services are fully stateless and can be scaled horizontally.
- API Gateway, Circuit Breaker, Rate Limiting, etc. are not implemented.
- Notifications, Logging, Monitoring, etc. are not implemented.
- Inter-service communication could be implemented with gRPC instead of REST.
- The services do not fully follow DDD, CQRS, or Event Sourcing, but they are designed so these patterns could be adopted.
- The event consumers are default Kafka consumers, which guarantees at-least-once delivery; each service therefore stores the events in its database and checks the event id to prevent duplicate processing.
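That idempotency check can be sketched as follows (the event shape and the in-memory store are made up for illustration; the real services persist processed event ids in the database):

```python
processed_ids: set[str] = set()  # stand-in for a DB table of handled event ids
balances: dict[str, int] = {"user-1": 1500}

def handle_order_created(event: dict) -> bool:
    """Apply an OrderCreated event at most once.

    Returns True if the event was applied, False if it was a duplicate
    redelivered under Kafka's at-least-once semantics.
    """
    if event["eventId"] in processed_ids:
        return False  # already handled; skip the side effects
    balances[event["userId"]] -= event["total"]
    processed_ids.add(event["eventId"])  # in reality: same DB transaction
    return True

event = {"eventId": "evt-42", "userId": "user-1", "total": 20}
print(handle_order_created(event))  # True  (balance is now 1480)
print(handle_order_created(event))  # False (duplicate, balance unchanged)
print(balances["user-1"])           # 1480
```

Storing the event id in the same database transaction as the balance update is what makes the check reliable: a crash between the two steps cannot leave the event half-applied.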