
saastack's Introduction

[Badges: Backend API - ReadyEnough | Frontend WebApp - Under Construction | Build and Test]

SaaStack

Are you about to build a new SaaS product from scratch? On .NET?

Then, try starting with the SaaStack codebase template.

It is a complete template for building real-world, fully featured SaaS web products.

Ready to build, test, and deploy into a cloud provider of your choice (e.g., Azure, AWS, Google Cloud, etc.)

Don't spend months building all this stuff from scratch. You and your team don't need to. We've done all that for you already; just take a look, see what is already there, and take it from here. You can always change it the way you like it as you proceed. You are not locked into anyone else's framework.

This is not some code sample like those you would download to learn a new technology or see in demos online. This is way more comprehensive, way more contextualized, and way more realistic about the complexities you are going to encounter in reality. This template contains a partial (but fully functional) SaaS product that you can deploy from day one and start building your product on. But it is not yet complete. That next part is up to you.

The codebase demonstrates common architectural styles that you are going to need in your product in the long run, such as:

  • A Pluggable Modular-Monolith - always build a monolith first, then separate out to micro-services later if you need to
  • Clean Architecture, Onion Architecture, and Hexagonal Architecture all have the same principles - low-coupling, high-cohesion, a shareable and protected domain at the center
  • Hosted behind a distributed REST API, or in a CLI (or in another executable).
  • Domain Driven Design (with Aggregates and Domain Events) - modeling actual real-world behaviors, not modeling just anemic data
  • Event Sourcing - because you cannot predict upfront when you will need historical data later, and when you do, you will be stuck; it also makes domain events a cinch
  • Event-Driven Architecture - to keep your modules de-coupled, distributed, and asynchronous from each other, focused on meaningful events across your product
  • Polyglot Persistence - decouples you from infrastructure, makes your entire system easy to test, and then upgrades as your business scales later
  • Extensive Test Automation (e.g., Unit, Integration, and E2E) so you can keep moving years into the future
  • B2B or B2C Multitenancy, you choose
  • Extensibility for all integrations with any 3rd party provider (e.g., Stripe, Twilio, LaunchDarkly, etc.) - because you want to start cheaply, and change over time as your new business changes and grows.

The fundamental design principle behind this particular combination of architectural styles is to optimize for change, since it is change that you need to make efficient to succeed in SaaS startups. It is the cost of change in software that determines the cost of designing and building software in the long run.

This starter template gives you most of the things all SaaS products will need from day one while maximizing your ability to evolve the specific behaviors and infrastructure components of your specific product - for the long run (i.e., over the course of the next 1-5 years).

What is in the box?

[Azure architecture diagram]

or if you prefer AWS:

[AWS architecture diagram]

How is the code structured?

The best experience for working with this template is in an IDE like JetBrains Rider, or Visual Studio, or Visual Studio Code (opening the solution file).

However, if working in an IDE is not your team's thing, then you can also rearrange the project folders into whatever structure you like. It is a starter template after all.

[Solution structure image]

Who is it for?

This starter template is NOT for everyone, nor for EVERY software project, nor for EVERY skill level.

We need to say that because all software products are different, and there is no one silver bullet for all of them.

  • The people using this template must have some experience applying "first principles" of building new software products from scratch because it is a starter template that can (and should) be modified to suit your context. (It is a far better starting point than building everything from scratch again. You need to understand the principles, not have to rewrite them all over again!).

  • The tech stack is a .NET core backend (LTS version 8.0 or later) written in C#, using (a few) but very popular and well-supported 3rd party libraries. (We've worked very hard to find a balance between too few and far too many).

  • This starter template deliberately makes engineering trade-offs that are optimized for situations where:

    1. High maintainability is super important to you over long periods of time (e.g., long-lived codebases)
    2. Managing complexity over long periods of time is non-negotiable (~1-10 years), and avoiding big balls of mud (BBOMs) is paramount to you,
    3. Where many hands will touch the codebase (i.e., over the course of its entire life). Of course, if you are working alone on a project, you will have personal preferences, free from the practical constraints of working in teams.

What is it for?

The kinds of 'known scenarios' that this template is designed specifically for:

  • Tech SaaS startups building their product from scratch
  • or experienced developers who are very familiar with these patterns and concepts and wish to adapt them to their context

Can you use this template if your context is different?

  • Yes, you can, but you need to be aware of why the trade-offs have been made in the way they have been made, then adapt them to your needs

Are these trade-offs suitable for any kind of software project?

  • No, they are not.
    • However, some of them may fit your specific context well.

If you want to know what the initial design constraints, assumptions, and trade-offs are, see our Decisions Log and Design Principles for more details.

What does it give you?

It is a starter "template," not a 3rd party library or a fancy 3rd party framework. Once you clone it, it is all yours:

  • You copy this codebase, as is, as your new codebase for your product.
  • You rename a few things to the name of your product.
  • You compile it, you run its tests, and you deploy its pieces into your cloud environment (e.g., Azure, AWS, or Google Cloud).
  • You then continue to evolve and add your own features to it (by following the established code patterns).
  • You then evolve and adapt the code to wherever you need it to go.
    • Don't like those patterns? Then change them to suit your preferences. There are no rigid frameworks or other dev teams to plead with.
  • At some point, you will delete the example subdomain modules (Cars and Bookings) that are provided as examples to follow and, of course, replace them with your own subdomain modules.
  • Read the documentation to figure out what it already has and how things work.
    • So that you either don't need to worry about those specific things yet (and can focus on more valuable things), or you can modify them to suit your specific needs. It is your code, so you do as you please to it.

Since this starter "template" is NOT a framework (of the type you usually depend on from others downloaded from nuget.org), you are free from being trapped inside other people's abstractions and regimes and then waiting on them to accommodate your specific needs.

With this template, all you need to do is: (1) understand the code here, (2) change the code to fit your needs, (3) update the tests that cover those changes, and (4) move on. Just like you do with any and all the code you write when you join a new company, team, or project. It is no different from that.

Want it to scale?

What happens when the performance of this modular monolith requires that you MUST scale it out, and break it into independently deployable pieces?

Remember: No business can afford the expense for you to re-write your product - so forget that idea!

This codebase has been explicitly designed so that you can split it up and deploy its various modules into separate deployable units as you see fit (when your product is ready for that).

Unlike a traditional monolithic codebase (i.e., a single deployable unit), all modules in this Modular Monolith codebase have been designed (and enforced) to be de-coupled and deployed independently in the future.

You just have to decide which modules belong in which deployed components, wire things up correctly (in the DI), and you can deploy them separately.

No more re-builds and extensive re-engineering to build a new distributed codebase when the time comes. It is all already in there for that future date.

What does it contain?

It is a fully functioning and tested system with some common "base" functionality.

It includes a working example of a made-up SaaS car-sharing platform, just for demonstration purposes.

You would, of course, replace that stuff with your own code! It is only there to demonstrate real code examples you can learn from.

The starter template also takes care of these specific kinds of things:

  • Deployment
    • It can be deployed in Azure (e.g., App Services or Functions) or in AWS (e.g., EC2 instances or Lambdas)
    • It is designed to be split into as many deployable pieces as you want when needed. (You simply replace the "RPC adapters" with "HttpClient adapters").
  • REST API
    • It defines a ruleset about how JSON is represented on the wire and how requests are deserialized (to cope with different client styles)
    • It localizes developer errors
    • It handles and maps common exceptions to standard HTTP status codes
    • It returns standard HTTP statuses for successful requests based on the HTTP method (e.g., GET = 200, POST = 201, PUT = 202, DELETE = 204)
    • Provides a Swagger UI.
  • Infrastructure
    • All infrastructure components are independently testable adapters
    • It implements multi-tenancy for inbound HTTP requests (e.g., HTTP Host headers, URL keys, etc.)
    • It implements multi-tenancy (for data segregation) using either data partitioning, physical partitioning, or both.
    • It implements polyglot persistence, so you can use whatever persistence technology is appropriate for each module per data load (e.g., SQLServer, Postgres, Redis, DynamoDB, Amazon RDS, LocalFile, In-Memory, etc.) - see the sketch after this list
    • It integrates 3rd party identity providers for authentication, 2FA, SSO, and credential management (e.g., Auth0, Microsoft Graph, Google, Amazon Cognito, etc.).
    • It integrates billing subscription management providers so that you can charge for your product use and determine feature sets based on subscription levels (e.g., Stripe, ChargeBee, Chargify, etc.).
    • It integrates feature flagging providers to control how to access your features and roll them out safely (e.g., LaunchDarkly, GitLab, Unleash, etc.)
    • It integrates product usage metrics to monitor and measure the actual usage of your product (e.g., MixPanel, Google Analytics, Application Insights, Amazon XRay, etc.)
    • It integrates crash analytics and structured logging so you can plug in your own preferred monitoring (e.g., Application Insights, CloudWatch, Sentry.io, etc.).
    • It uses dependency injection extensively so that all modules and components remain testable and configurable.
    • It defines standard and unified configuration patterns (e.g., using appsettings.json) to load tenanted or non-tenanted runtime settings.
  • Application
    • Supports one or more applications, agnostic to infrastructure interop (i.e., allows you to expose each application as a REST API (default) or as a reliable Queue, or any other kind of infrastructure)
    • Supports transaction scripts + anemic domain model or Domain Driven Design
    • Applications are aligned to audiences and subdomains
  • Others
    • It provides documented code examples for the most common use cases. Simply follow and learn from the existing patterns in the codebase
    • It provides how-to guides for performing the most common things on a codebase like this, until you've learned the patterns.
    • It provides a decision log so you can see why certain design decisions were made.
    • It provides documentation about the design principles behind the codebase so you can learn about them and why they exist.
    • It [will] provide an eco-system/marketplace of common adapters that other people can build and share with the community.
    • It demonstrates extensive and overlapping testing suites (unit tests, integration tests, and end-to-end tests) to ensure that production support issues are minimized and regressions are caught early on, as well as allowing you to change any of the existing base code safely
    • It defines and enforces coding standards and formatting rules
    • It utilizes common patterns and abstractions around popular libraries (that are the most up-to-date in the .NET world), so you can switch them out for your preferences.
    • It defines horizontal layers and vertical slices to make changing code in any component easier and more reliable.
    • It enforces dependency direction rules so that layers and subdomains are not inadvertently coupled together (enforcing architectural constraints)
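
To make the polyglot persistence point above concrete, here is a minimal sketch of the port/adapter seam, assuming a simplified IDataStore port (the codebase's real IDataStore interface is richer than this), with a hypothetical in-memory adapter:

    using System.Collections.Generic;
    using System.Threading;
    using System.Threading.Tasks;

    // A simplified persistence "port". Modules depend only on this interface,
    // never on a concrete technology, so each module can choose (and later swap)
    // its own store without touching domain or application code.
    public interface IDataStore
    {
        Task<TEntity?> RetrieveAsync<TEntity>(string id, CancellationToken cancellationToken);
        Task SaveAsync<TEntity>(string id, TEntity entity, CancellationToken cancellationToken);
    }

    // An in-memory adapter: handy for unit and integration testing, and trivially
    // replaceable by a SqlServer/Postgres/DynamoDB adapter implementing the same port.
    public sealed class InMemoryDataStore : IDataStore
    {
        private readonly Dictionary<string, object> _entities = new();

        public Task<TEntity?> RetrieveAsync<TEntity>(string id, CancellationToken cancellationToken)
        {
            return Task.FromResult(_entities.TryGetValue(id, out var entity)
                ? (TEntity?)entity
                : default);
        }

        public Task SaveAsync<TEntity>(string id, TEntity entity, CancellationToken cancellationToken)
        {
            _entities[id] = entity!;
            return Task.CompletedTask;
        }
    }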

saastack's People

Contributors

jezzsantos, joshuavial


saastack's Issues

Basic Capabilities

See the Current Use Cases for a final list of all use cases in the product.

The list below is just the order in which we are planning the remaining work to be done, including remaining technology adapters.

Features:

(in this order)

  • REST API Design (w/ Minimal API + MediatR versus Controllers)
  • Testing patterns (Unit + Integration + EndToEnd)
  • HTTP request validation (w/ FluentValidation)
  • Modular plugins (API projects)
  • Async pattern top to bottom
  • Response formats (i.e., JSON and XML, with the possibility of others)
  • Default HTTP status codes (PUT/PATCH/POST/GET)
  • Search API support
  • Typed HttpClient (i.e. JsonClient that understands our request and response types)
  • Exception handling and Error handling (Result<TError> and Optional<T>) patterns
  • Roslyn analyzers for coding standards in Core assemblies
  • Request correlation. Naked incoming requests, versus chained API calls (ICallContext)
  • ServiceClients
  • Configuration
  • Basic DDD aggregates and ValueObjects
  • CQRS pattern
  • Persistence interfaces (IDataStore, IQueueStore, IEventStore, IBlobStore)
  • Persistence implementations (basic ones: InMem and LocalFile)
  • Ancillary messaging mechanisms (usages, audits, etc),
  • HostRecorder: Logging, Auditing, CrashReporting, Usage Metrics (IRecorder), using AI and queues
  • Delivery of emails to gateway from queue
  • CORS
  • AuthN integration (HMAC, JWT Transparent Token)
  • Roles and FeatureLevel authorization
  • Reverse Proxy for WebsiteHost, and Cookie authentication
  • SSO authentication example
  • Feature Flagging (Feature Toggles, Audiences, etc)
  • Multi-tenancy (EndUsers, Organisations, Memberships etc. main flows)
  • .NET 8.0 and improvements in DI
  • Roslyn rules for cross-domain dependencies, and layer dependencies. #10
  • Images and Avatars
  • Remaining API use cases and surface area (e.g. APIKey, Organization APIs)
  • Cleanup: (1) rename all context to caller (2) remaining usage and audit statements, (3) ???
  • SwaggerUI and API explorer
  • Images callback to UserProfile and Organization when deleted in ImagesAPI
  • Billing integration
  • Technology Adapters for persistence implementations:
    • Billing:
      • Chargebee
      • Chargify
    • Database Azure:
      • SQLServer
      • Azure Table Storage
    • Database AWS:
      • RDS-Postgres
      • DynamoDb
    • Usages:
      • UserPilot
      • MixPanel
    • Email:
      • MailGun
      • SendGrid
    • FeatureFlags:
      • Flagsmith
      • Unleash or LaunchDarkly
    • SSO:
      • Microsoft
      • Google
  • IaC assets, i.e., for deploying queues to AWS or Azure, to round out the deployment puzzle.

Coming later:

  • External stubs
  • Tenant Configuration
  • Distributed System Interop - Event-Driven Architecture
  • Cloud Hosting (AWS or Azure or GC)
  • Adapters: Retry Policies, Circuit Breaker
  • Offloading Usage Metrics
  • Offloading Auditing

Billing integration

All SaaS products require some form of integration with a billing management system (like Chargebee, Maxio, etc.), and often with a payments gateway (like Stripe). These billing management systems are often not also payment gateways, and the two should be treated separately.

These systems offer their own management portal to define customers, subscriptions, plans, etc., which various people in the business work with directly to set things up, set pricing plans, etc. They also use them to apply extra discounts, changes, etc. Some also allow manual reconciliation and taking payments directly.

Many products are themselves integrated directly with the billing management system, and query things like which subscription the user has and how many users are paid for. They also allow certain users to change pricing plans, quotas, etc.

The job here in SaaStack is to provide a basic integration that does the most common things around billing, and that is integrated with the product at the points where information needs to be exchanged between the SaaS product and the billing management system.

We need to document the use-cases, and establish a reasonably common starting point, so that the product can easily be changed to work with any variant of billing that the SaaS business wishes to implement as the business changes.

GTM

When we reach the point of this codebase being useful enough, it's time to go to market with it.

These are the things we might do:

  • Notify some favorite dotnet influencers: Kzu, Nick Chapsas, Derek Comartin, Amichai Mantinband, Milan Jovanović, Steve "ardalis" Smith, etc.

Improve Event Notification

Presently, we have a couple of issues with the eventing pattern for notifications.

Domain Events are meant to cross subdomain boundaries, but remain inside bounded contexts, whereas Integration events can cross bounded contexts into the universe.
We need to materialize that difference in the whole mechanism of IEventNotificationRegistrations.

The second problem is that we can only register one IEventNotificationRegistration in the call to service.RegisterUnTenantedEventing() that is done in each subdomain module (which is the source of the domain events).
Ultimately, we need to be able to register many IEventNotificationRegistrations (one in each subdomain) and have the implementations of these "handlers" injected into the DI container by other subdomain modules (to avoid illegal references).

The third problem is that the EventNotificationNotifier is only handling a single IEventNotificationRegistration that it finds related to the Producer of the same type as the source aggregate that is publishing its events. We should be finding all IEventNotificationRegistrations.

Roslyn Analyzers not running on Platform projects

Problem

Cannot use the current Tools.Platform.Analyzers in all the platform projects (IsPlatformProject==true).

Cannot build Common, because it depends on Tools.Platform.Analyzers
Cannot build Infrastructure.Web.Api.Interfaces, because it depends on Tools.Platform.Analyzers

Cannot build Tools.Platform.Analyzers because it depends on Common and on Infrastructure.Web.Api.Interfaces

We have a cyclic dependency.

Analysis

How to resolve?

Should we separate the MissingDocsAnalyzer into a separate assembly, since that should only be running on
IsPlatformProject && !IsTestingProject && !IsRoslynComponent

whereas the other things (subdomain specific/shared) should be running on
!IsPlatformProject && !IsTestingProject && !IsRoslynComponent

Move MultitenancyDetective to Middleware, and earlier in the pipeline

With Minimal APIs, it is currently very difficult to obtain the instance of the request being used in the handler of the minimal API.
All of our generated APIs have the following signature (as can be seen in the output of the MinimalApiMediatRGenerator):

            apiGroup.MapGet("/health",
                async (IMediator mediator, [AsParameters] HealthCheckRequest request) =>
                     await mediator.Send(request, CancellationToken.None));

In ASPNET middleware, there seems to be no easy way to obtain the instance of the 2nd request parameter.

It is reasonably easy to get the metadata about the handler like this:

    var endpoint = httpContext.GetEndpoint();

    //TODO: get the second parameter of the endpoint request
    var method = endpoint?.Metadata.GetMetadata<MethodInfo>();
    var args = method?.GetParameters();
    var requestDtoType = args?[1].ParameterType;
    if (requestDtoType.NotExists())
    {
        return false;
    }

    if (requestDtoType.IsAssignableTo(typeof(ITenantedRequest)))
    {
        //TODO: extract the ITenantedRequest.Organization from the request;
        tenantId = "atenantid";
        return true;
    }

But getting an instance of the actual request object is far harder to do.

The only clue we have is by following the code in the RequestDelegateFactory that creates an instance of an EndpointFilterInvocationContext by using the instance of the RequestDelegateFactoryContext derived from a bunch of things.

This code is used by the EndpointMiddleware to provide its sub-pipeline for any registered IEndpointFilter, where it feeds each IEndpointFilter an instance of the EndpointFilterInvocationContext, which gives access to the instance of the Arguments, which is what we are after.

So this capability is currently only available in a custom IEndpointFilter, but not in middleware.

This means that we cannot implement middleware to do certain things that require the instance of the request.
What we can do is wire in an IEndpointFilter and get access to it, but this may mean that this IEndpointFilter comes too late in the request pipeline to make use of that information.

At present, ASPNET dictates that all IEndpointFilters run after the EndpointMiddleware executes, which is the last piece of middleware, coming after any custom middleware we add. So this is potentially a problem.

For now, or until we can find a better way, the MultiTenancyFilter is how we will process tenanted requests to obtain and validate the tenant, and perform multitenancy.
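
For illustration, here is a minimal sketch of that IEndpointFilter approach (the filter name matches the one mentioned above; ITenantedRequest comes from this codebase, while the OrganizationId property name and the use of HttpContext.Items are illustrative):

    // Assumes an ASP.NET Core (net7.0+) project with implicit usings enabled.
    public sealed class MultiTenancyFilter : IEndpointFilter
    {
        public async ValueTask<object?> InvokeAsync(EndpointFilterInvocationContext context,
            EndpointFilterDelegate next)
        {
            // context.Arguments holds the bound handler parameters (e.g., the IMediator
            // and the request DTO), which middleware cannot see.
            var requestDto = context.Arguments.OfType<ITenantedRequest>().FirstOrDefault();
            if (requestDto is not null)
            {
                // Illustrative property name: extract and validate the tenant here
                var tenantId = requestDto.OrganizationId;
                context.HttpContext.Items["TenantId"] = tenantId;
            }

            return await next(context);
        }
    }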

Top Level - Must haves

Top level qualifying capabilities

  • Must be multi-tenanted from the get-go.
  • Must contain centralized resources (UserAccount, Membership, Organisation, Profile, etc) for B2C. But we need an easy way to switch that over to tenanted in the B2B scenario.
  • Must have a pluggable AuthN provider. Must have a default (credentials provider) to get started and then switch later (e.g. to Auth0)
  • Should use ASP.NET Minimal API + MediatR (instead of ServiceStack)
  • Must be async top to bottom by default, for any lower level IO-bound entry points (i.e. APIs, Queues etc)
  • EDA Notifications should be asynchronous by default (using a message broker)

Code Rules. Guidance and enforcement

These coding rules should be backed by Roslyn Analyzers and Code Fixes.

API Class

Location: any project with variable: <HasApi>true</HasApi>
Definition: Any instance method of an API class (instance class derived from IWebApiService)

Rules:

  1. Warning: Should return a Task<T> or T, where T is either ApiEmptyResult or ApiResult<TResource, TResponse> or ApiPostResult<TResource, TResponse>.
  2. Warning: must have at least one parameter, and the first parameter must be IWebRequest<TResponse>, where TResponse is the same type as in the return value. The second parameter can only be a CancellationToken
  3. Warning: method must be decorated with a WebApiRouteAttribute
  4. Warning: the route of all methods in this class should start with the same path
  5. Warning: should not have more than one method (in the same class) with the same TRequest
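
For illustration, a method satisfying the five rules above might look like this sketch (the request, response, and resource types are illustrative, the attribute usage borrows the OperationMethod enum used elsewhere in this codebase, and the body is elided):

    // Rule 3: decorated with WebApiRouteAttribute; Rule 4: all routes in this
    // class would share the /cars prefix.
    public sealed class CarsApi : IWebApiService
    {
        [WebApiRoute("/cars/{Id}", OperationMethod.Get)]
        public Task<ApiResult<Car, GetCarResponse>> Get(GetCarRequest request,
            CancellationToken cancellationToken)
        {
            // Rule 1: returns Task<ApiResult<TResource, TResponse>>
            // Rule 2: GetCarRequest implements IWebRequest<GetCarResponse>,
            //         and the second parameter is a CancellationToken
            throw new NotImplementedException(); // body elided for brevity
        }
    }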

API project file (csproj)

Location: any project with variable: <HasApi>true</HasApi>
Definition: N/A

Rules:

  1. Error: Should either have a package reference to SaaStack.Tools.Generators.WebApi or a project reference to Tools.Generators.WebApi.csproj

Roslyn rules and restrictions

Two points:

  1. The rule (SAASAPP030) does not seem to fire in normal development when creating a new repository, before decorating it with IApplicationRepository
  2. The properties on ReadModels cannot, of course, be either List or Dictionary, as these cannot be natively serialized by any Store. Choose JSON instead.

Machine buying a billing subscription

At present, when a machine is registered, it is registered without an email address.
But, a default organization is created for it, and so is a subscription.

The subscription buyer is without an email address, or any way to contact the machine.

Is this a supported scenario? Should we let the machine be able to have a billing subscription?

swagger code gen id query parameter inconsistency

The generated swagger.json file is inconsistent with how the "id" parameter shows up as a query parameter in GET requests vs PATCH/PUT/DELETE requests, which leads to tools like orval not functioning correctly when trying to auto-create react-query hooks from the generated swagger.json.

For example, in the swagger.json file created for ApiHost1, for a PUT request to the /cars/{Id}/maintain endpoint, the generated doc shows up like so:

    "/cars/{Id}/maintain": {
      "put": {
        "tags": [
          "CarsApi"
        ],
        "summary": "Schedules the car for maintenance for the specified period",
        "description": "(request type: ScheduleMaintenanceCarRequest)",
        "operationId": "ScheduleMaintenanceCar (Put)",
        "requestBody": {
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/ScheduleMaintenanceCarRequest"
              }
            }
          }
        }

Whereas for GET requests, for example the /cars/{Id} endpoint, the generated swagger.json shows up like so:

"get": {
        "tags": [
          "CarsApi"
        ],
        "summary": "Fetches the specified car",
        "description": "(request type: GetCarRequest)",
        "operationId": "GetCar",
        "parameters": [
          {
            "name": "Id",
            "in": "path",
            "required": true,
            "style": "simple",
            "schema": {
              "type": "string"
            }
          },

Here, the Id parameter is explicitly defined as a path parameter.

Due to this I am getting an error while using orval to create react-query clients:

The path params id can't be found in parameters (ScheduleMaintenanceCar (Put))
    at /home/m/repos/saastack-react/node_modules/@orval/core/dist/index.js:46067:13
    at Array.map (<anonymous>)
    at getParams (/home/m/repos/saastack-react/node_modules/@orval/core/dist/index.js:46058:17)

@jezzsantos I am not sure if this is an orval problem or saastack problem per se, so I am posting it here to get your opinion on it as well.

Assumptions Mapping

(Using David Bland's Guidance on assumption mapping, the goal is to identify the "leap of faith" assumptions we are making behind this idea)

Desirability:

  • We believe that the target customer for our solution is: CTOs of startups or tech leads of Service provider businesses.
  • We believe that our value proposition to our customers is: Saving time and money getting up and running right immediately, with a known starting point (template) and known way forward from there.
  • We believe that customers can or cannot solve this problem by: can start from scratch and will spend time and money getting to the same place with varying degrees of support for continuing forward.
  • We believe that we can reach our customers through: key developer influencers, and industry publications, to employees, to startups.
  • We believe that we can keep our customers by: producing an outstanding outcome for them, using already known ingredients

Feasibility:

  • We believe that we can address legal and regulatory risks by: open source and issuing restricted licenses
  • We believe that we can solve for the technology challenges by: already having that knowledge in-house, with future research to keep up with change.
  • We believe that we can hire and keep the right team because: we are experienced at doing that already, and this problem is desirable to solve for a whole class of competent software engineers looking to flex their skills.
  • We believe that we can sign key partners by: <unknown>
  • We believe that we are uniquely positioned to win because: we have the experience of doing this for small service providers.

Viability:

  • We believe that we can generate revenue by: selling and restricting a license of the software.
  • We believe that we can keep our costs low by: keeping the team small.
  • We believe that customers will pay a high enough price because: only if the value proposition is easily understood and obvious to them, whereas the alternative is far harder to predict.
  • We believe that we can make a profit because: there are enough service providers out there looking for a common stack to get started on, but only if we can reach them.
  • We believe that this aligns with our vision because: <unknown>

Record HTTP responses

We presently record (using IRecorder.Trace) all errors and exceptions coming from the API.

But:

  • We don't record the successful API call
  • We don't even record the normal errors (like 400-RuleViolation)

MultiTenancy Middleware returns 400 instead of 401 when the call is not authenticated

Is there a reason why the MultiTenancy Middleware returns 400 "The ID of the organization is missing from this request" instead of 401 when the call is not authenticated?

Since we are saying that org id doesn't need to be passed in for requests on the default org, I think it would make more sense to return 401 when the call requiring multitenancy is not authenticated.

Currently in MultiTenancy Middleware, we have:

    private async Task<Result<string?, Error>> VerifyDefaultOrganizationIdForCallerAsync(ICallerContext caller,
        IEndUsersService endUsersService, List<Membership>? memberships, CancellationToken cancellationToken)
    {
        if (!caller.IsAuthenticated)
        {
            return Error.Validation(Resources.MultiTenancyMiddleware_MissingDefaultOrganization);
        }

        if (memberships.NotExists())
        {
            var retrievedMemberships = await GetMembershipsForCallerAsync(caller, endUsersService, cancellationToken);
            if (retrievedMemberships.IsFailure)
            {
                return retrievedMemberships.Error;
            }

            memberships = retrievedMemberships.Value;
        }

        var defaultOrganizationId = GetDefaultOrganizationId(memberships);
        if (defaultOrganizationId.HasValue())
        {
            return defaultOrganizationId;
        }

        return Error.Validation(Resources.MultiTenancyMiddleware_MissingDefaultOrganization);
    }

But unless I am missing something, I think it makes more sense to have it as:

    private async Task<Result<string?, Error>> VerifyDefaultOrganizationIdForCallerAsync(ICallerContext caller,
        IEndUsersService endUsersService, List<Membership>? memberships, CancellationToken cancellationToken)
    {
        if (!caller.IsAuthenticated)
        {
             return Error.NotAuthenticated(Resources.AuthenticationHandler_Failed);
        }
...

Why this change is needed?

Usually, front-end clients like React will be using Axios to inject a bearer token with every request, and will call the refresh endpoint when the server returns 401. In this case, for an expired token, the server returns 400 instead of 401, so the client will have to rely on the specific error message to distinguish this 400 from other 400-type errors, which I don't think is ideal.

I know the supported model for saastack is the BEFFE architecture, where the token is not directly sent from the front-end client, but I think there is a high chance that startups might want to start with only one front end and one back end, in order to deploy the front-end client statically to a CDN and just have one back end, in which case they will have to do the token injection with Axios or fetch. So, because of the ease this would provide in getting a fresh token, unless I am missing some side effect, maybe we can return 401 there instead of 400?

Source Generated MediatR Handlers cannot be debugged

The MediatR handlers that are generated from the IWebApiService declarations are not debuggable at runtime (due to how Source Generators work).

The code in the IWebApiService class is also not debuggable since it is not part of the execution pipeline.

This experience could be better for developers.

Solution

A better experience might be achieved if we did this:

  1. Source Generate the MediatR handlers, but instead of including the original code in them, we generate code that calls the original code.
  2. To do this, we would have to instantiate the IWebApiService class (in the handler) and call the appropriate method.
  3. To do that, we need to construct the instance of the class (in the handler) using the selected ctor, and local variables in the selected ctor.

This approach seems feasible because:

  • Each HTTP request would already need to create an instance of the IWebApiService class and inject dependencies. Otherwise, we would not be able to use Scoped dependencies.
  • We have the variable names of all the injected dependencies of the IWebApiService class in the generator already.
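
A sketch of what such a generated handler could look like under this proposal (the type names are illustrative):

    // Instead of inlining the service's code, the generated handler receives the
    // service's dependencies from DI and delegates to the original (debuggable) method.
    public sealed class GetCarRequestHandler : IRequestHandler<GetCarRequest, ApiResult<Car, GetCarResponse>>
    {
        private readonly ICarsApplication _carsApplication; // a ctor dependency of CarsApi

        public GetCarRequestHandler(ICarsApplication carsApplication)
        {
            _carsApplication = carsApplication;
        }

        public Task<ApiResult<Car, GetCarResponse>> Handle(GetCarRequest request,
            CancellationToken cancellationToken)
        {
            // Construct the IWebApiService class using its selected ctor, then call
            // through to it, so breakpoints in CarsApi.Get() are hit at runtime.
            var api = new CarsApi(_carsApplication);
            return api.Get(request, cancellationToken);
        }
    }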

Detecting multi-tenancy

We can only access the instance of the current request DTO from an IEndpointFilter (like the MultiTenancyFilter).
These endpoint filters execute after all the middleware has been executed (including any custom middleware).

Trying to access the instance of the request DTO outside the IEndpointFilter would be ideal, but is presently not feasible. More research is needed.

We need the tenantId at the time that we verify Authorization roles and features for specific organizations. This occurs during the Authentication/Authorization middleware, which occurs way up the request pipeline. At that time, we simply don't have access to the TenantId from the request DTO, since it is set way further down the pipeline.

In order to perform Authorization fully (with these constraints in place), we either:

  1. Find a way to access the request DTO outside the IEndpointFilter. Then, we can set the TenantId in a middleware before the Authentication/Authorization middleware runs, and we move from IEndpointFilter to Middleware.
  2. Do not verify that the user has a specific organization in the Authorization middleware; just match the role part, not the tenant part, as the tenantId will be matched later in the pipeline (by the MultiTenancyFilter).
  3. Other options?

Naming

The name should refer to the word stack

Defining "qualities" (apart from platform dotnet):

  • Modular Monolith
  • Clean Architecture (Ports and Adapters)
  • Domain Driven Design
  • Testable

CDMT - commandment stack
MOCTD -

User entity name

Traditionally, we have named the entity that represents an individual using the software as a User.
(In past implementations, we have used UserAccount).

In this implementation, we have the chance to diverge from that thinking.
The word Account could also represent the billing or buyer use cases.

The end-user is really the user of the software.
In B2C they are also the account holder or buyer. In B2B they are often not both the end-user and the buyer - but sometimes can be.

Since we are trying to define a model to include both B2C and B2B, perhaps we are better off making this distinction early on.

I want to propose that the entity for an end-user be called EndUser, which represents their identity (ID) and associated use cases. This entity is, of course, referenced by other subdomains.

We still use Organization to represent the grouping entity (i.e. company in B2B, or team/project/workspace etc in B2C), but then this frees us to use the Account entity for the buyer use cases, their billing, subscription etc.

Role & FeatureLevel based authorization

We want the developer to be able to declare coarse-grained role-based and feature-based authorization for endpoints.

At this stage, we are not designing for fine-grained permissions or policies.

We already define a small number of Authorization policies using net7.0 minimal API authorization policies, but those policies are not easily extendable to be used in declarative ways.

All of our Roles and FeatureLevels are already very discrete and can be turned into enumerations (either in code directly or using source generators)

Once we have that and some declarative syntax to markup service operations (i.e., an extension to the RouteAttribute) or another mechanism, we can make the declarative syntax very easy.

One such approach is outlined here: https://www.linkedin.com/pulse/permission-based-authorization-aspnet-7-minimal-apis-yago-vicent/
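
For example, the declarative syntax could end up looking something like this sketch (the AuthorizeForRoleAndFeature attribute and the Roles/Features enumerations are hypothetical, not existing types in this codebase):

    // Hypothetical markup: a generator (or registration code) would translate this
    // into a minimal API authorization policy on the mapped endpoint, e.g.
    // .RequireAuthorization("Role_Operations_Feature_PaidTrial")
    [Route("/cars/{Id}/maintain", OperationMethod.Put)]
    [AuthorizeForRoleAndFeature(Roles.Operations, Features.PaidTrial)]
    public class ScheduleMaintenanceCarRequest : IWebRequest<ScheduleMaintenanceCarResponse>
    {
        public required string Id { get; set; }
    }

    // Hypothetical discrete enumerations, per the note above about Roles and
    // FeatureLevels being discrete enough to enumerate.
    public enum Roles { Standard, Operations }

    public enum Features { Basic, PaidTrial }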

Support enumerations in request objects

Let's say I have this definition of a request:

/// <summary>
///     Tests the use of enums in the request
/// </summary>
[Route("/testingonly/general/enum", OperationMethod.Post)]
public class PostWithEnumTestingOnlyRequest : IWebRequest<StringMessageTestingOnlyResponse>
{
    public TestEnum? AnEnum { get; set; }

    public string? AProperty { get; set; }
}

public enum TestEnum
{
    Value1,
    Value2,
    Value3
}

The serializer cannot handle the enum definition in the request.

However it can handle this request:

[Route("/testingonly/general/enum", OperationMethod.Post)]
public class PostWithEnumTestingOnlyRequest : IWebRequest<StringMessageTestingOnlyResponse>
{
    public string? AnEnum { get; set; }

    public string? AProperty { get; set; }
}
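
A likely remedy (an assumption, not yet verified in this codebase) is to register the built-in JsonStringEnumConverter with the host's JSON options, so that enum properties round-trip as strings:

    using System.Text.Json.Serialization;

    var builder = WebApplication.CreateBuilder(args);

    // Registers the built-in string<->enum converter with the System.Text.Json
    // options used by minimal APIs, so DTOs can declare enum properties (e.g. TestEnum?).
    builder.Services.ConfigureHttpJsonOptions(options =>
    {
        options.SerializerOptions.Converters.Add(new JsonStringEnumConverter());
    });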

Email Delivery Reliability

Present Day

At the moment, the EmailNotificationService packages up the request to deliver an email and drops it on the "emails" queue, to be dealt with asynchronously.

An AzureFunction/Lambda is triggered and picks up the message, and sends it to the Ancillary API. The message is then sent to a 3rd party service (e.g., MailGun, SendGrid, etc.).
During this delivery step, delivery and network problems are handled with retries and backoffs (3 retries with exponential jittered backoff) - see the sketch below.
If the message fails delivery (including backoffs, etc.), the AzureFunction/Lambda will retry several times over the course of the next few minutes (5 times is the default).
If the message is not delivered (i.e., the API call does not return HTTP 200), then the AzureFunction/Lambda will place the message reliably on the poison queue. Alerts should be raised, and a manual process must be deployed to resolve it.
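
For reference, a minimal sketch of the "3 retries with exponential jittered backoff" step (the actual delivery code may use a policy library instead):

    public static async Task<bool> SendWithRetriesAsync(Func<CancellationToken, Task> send,
        CancellationToken cancellationToken, int maxRetries = 3)
    {
        var random = new Random();
        for (var attempt = 0; attempt <= maxRetries; attempt++)
        {
            try
            {
                await send(cancellationToken);
                return true; // delivered
            }
            catch (Exception)
            {
                if (attempt == maxRetries)
                {
                    return false; // exhausted; the caller moves the message to the poison queue
                }

                // 2^attempt seconds, plus up to 1s of jitter to avoid thundering herds
                var delay = TimeSpan.FromSeconds(Math.Pow(2, attempt))
                            + TimeSpan.FromMilliseconds(random.Next(0, 1000));
                await Task.Delay(delay, cancellationToken);
            }
        }

        return false; // unreachable
    }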

Problem

Email delivery is a business-critical function, and even though we have a reliable asynchronous mechanism in place right now, there is little data tracking the whole process. It is possible that the queued message is lost in the process (i.e., deleted from the queue by an operator, or when limits are inadvertently reached). When this happens, the system has no record of the email, and in a production support scenario, this will be hard (though not impossible) to detect or resolve.

It is possible to track the email from its inception, and through the synchronous process, given its unique MessageId.

To do this better, we would need to capture the following events:

  1. When the email was scheduled for delivery, before it appears on the queue.
  2. When the email was picked off the queue and an attempt was made to deliver it
  3. If and when an attempt to deliver it failed or not
  4. When the delivery succeeds
  5. Later, when we hear back (via webhook) from the 3rd party about the status of the email delivery, as it can still fail in the 3rd party (i.e., blocked email domains, etc.)

All these events should be captured in the backend API in the Ancillary domain.
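
A sketch of the lifecycle the Ancillary domain could record, one state per event in the list above (the names are illustrative):

    public enum EmailDeliveryStatus
    {
        Scheduled,      // 1. created, about to be placed on the "emails" queue
        AttemptStarted, // 2. picked off the queue, a delivery attempt is under way
        AttemptFailed,  // 3. an attempt failed (will be retried, or poisoned)
        Delivered,      // 4. accepted by the 3rd party gateway
        Confirmed       // 5. final status reported back later by the 3rd party webhook
    }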

Review control flow decision

We've already discussed that exceptions are okay to throw from deep within the code for exceptional reasons.

However, there is a class of errors that we do expect from APIs. For example, "aggregate not found", which should be translated to a 404 - Not Found response, with an optional reason.

There are many more of these kinds of things, RuleViolation, RoleViolation are two others we could expect frequently.

The question is: what is a reasonable design improvement here (one that is an improvement on raising exceptions, and is not too esoteric) for some kind of result type that we could bubble up from the Domain/Application layers, that the API layer can translate into HTTP status codes?

We don't want to do this for the hell of it, and certainly not to make any claims about becoming more functional.

Solutions

One common solution is to use a Result<TResult> type, where you can either return the expected result (including no result) or return an exception and/or some well-defined error type.
The error type would have to be good enough to tell the whole story (e.g., code + reason), and we could define some pretty standard ones.

The exception could be there in case you want to raise one. But in general, you would just throw the exception, and the runtime would catch it at the API layer, and convert it to a 500 - Internal Server Error in all cases - since it is unexpected.
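
A minimal sketch of that idea (the codebase's eventual Result<TError> and Optional<T> types are richer than this):

    public readonly struct Result<TValue>
    {
        private readonly TValue? _value;

        public Error? Error { get; }

        private Result(TValue value)
        {
            _value = value;
            Error = null;
        }

        private Result(Error error)
        {
            _value = default;
            Error = error;
        }

        public bool IsSuccess => Error is null;

        public TValue Value => IsSuccess
            ? _value!
            : throw new InvalidOperationException($"No value: {Error!.Code}");

        public static implicit operator Result<TValue>(TValue value) => new(value);

        public static implicit operator Result<TValue>(Error error) => new(error);
    }

    // The error tells the whole story (code + reason), and the API layer maps
    // well-known codes to HTTP status codes (e.g., EntityNotFound -> 404 - Not Found).
    public sealed record Error(string Code, string? Reason = null)
    {
        public static Error EntityNotFound(string? reason = null) => new("EntityNotFound", reason);

        public static Error RuleViolation(string? reason = null) => new("RuleViolation", reason);
    }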

What could be another solution?

Problems to avoid

  • We don't want anything esoteric that is hard for newcomers to grok
  • We don't want to make the code hard to read or understand
  • We don't want to add far more overhead in testing than we would otherwise have

Testing Azure functions

Our Integration.External tests for Azure functions simply don't work - they are broken because it is not yet known how to spin up the functions in a way that they are triggered by pushing messages onto the queues that they monitor.

The documentation for solving this specific problem does not exist.
We have an issue out there with the dotnet team, but nothing has moved in years.

We have this blog post that suggests a different approach: https://jaliyaudagedara.blogspot.com/2024/03/creating-integration-tests-for-azure.html

Fix Integration.Website tests

As it turns out, due to various refactorings, we are starting up and shutting down the backend API for every single test in the Integration.Website category, whereas we should probably only be doing that per class, rather than per test.

Download images by file extension

Presently, you can upload an image whose content-type is limited to GIF, JPEG, or PNG, and that is determined from the signature of the first few bytes.
We store the raw image and the content-type of the image, and we use URLs like this (https://host.com/images/image_1234567890123456789012/download) to download the image.

Now, when downloading an image, you are expected to include an Accept header, and this is limited to either:

  1. image/png
  2. image/gif
  3. image/jpeg
  4. or no Accept header

Anything else is rejected.
The problem is that the caller that wants to download the image needs to know the content-type to include in the download request.

The actual content type is NOT knowledge they would readily have.

Solution

Instead of providing the Accept header, it would be better to just make the download request without any Accept header, or use the value of */* for the Accept header.
To do this, we would be better to either:

  • rely on the fact that we already deal with no Accept header now, and that works fine,
  • deal with the Accept: */* header,
  • OR use file extensions on the image URL (such as: https://host.com/images/image_1234567890123456789012.jpeg) that self-identify the respective image content type, so that we do not need to know the Accept header

Support a Change Of Email API?

Should we allow the user to change their email address (if using PasswordCredentials)?
It would be hard to support forgotten passwords if we didn't allow that.

What happens if they are registered with their SSO email, and they change that behind the scenes?
How do we correlate the userId in our system with the new email address?

The change process is quite elaborate if we follow the OWASP recommendations:
https://owasp.org/www-community/pages/controls/Changing_Registered_Email_Address_For_An_Account

Improvements after first deployment

  • SAASDDD049 should allow List and Dictionary<string, DTO>()
  • Application repository interfaces should derive from IApplicationRepository
  • Make sure the SkipImmutabilityAttribute has decent docs for first-time users
  • Need a Roslyn rule to make sure that the GetAtomicValues() method of the SingleValueObjectBase class has the right number of parameters defined for it (based on the count of public getter methods)

Dependency Injection patterns need improving

We have some complex issues to resolve with DI.

We need to support physical data partitioning (out of the box), which means that it is possible for each tenant (each HTTP request) to be using separate credentials/connection strings/etc. (or other secret config) on any of their store adapters (IDataStore, IQueueStore, IEventStore, or IBlobStore) for any of the tenanted domains.

It also means that the "platform" subdomains (e.g., Organization, EndUser, Ancillary, etc.) will require different instances of the IDataStore, IQueueStore, IEventStore, or IBlobStore adapters, since those will have a fixed set of credentials/connection strings/etc. (separate from each tenant's) in the adapter, pointing to the same set of external systems.

Thus, we need certain implementations (e.g. the IDataStore adapter) to be in the IoC container twice: once for usage by Platform components, and once for tenanted (per HTTP request) components.

.NET 7.0 (and prior) does not support named instances; however, .NET 8.0 does support "keyed instances", which is exactly what we need to achieve this using keys (one known "key" for the platform stuff, and no key for everything else). But it means that when dependencies are resolved, the component resolving them must use different APIs to do it, and be explicit about it.
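
A sketch of the .NET 8.0 keyed-instances approach (SqlServerDataStore, TenantedSqlServerDataStore, and AncillaryRepository are illustrative types):

    public static class StoreRegistration
    {
        public const string PlatformKey = "Platform";

        public static void RegisterStores(IServiceCollection services)
        {
            // One fixed-credential instance for the "platform" subdomains...
            services.AddKeyedSingleton<IDataStore, SqlServerDataStore>(PlatformKey);

            // ...and a per-request instance for tenanted subdomains, whose connection
            // settings are resolved from the current tenant on each HTTP request.
            services.AddScoped<IDataStore, TenantedSqlServerDataStore>();
        }
    }

    // Resolution must now be explicit about which instance it wants, either via
    // provider.GetRequiredKeyedService<IDataStore>(StoreRegistration.PlatformKey),
    // or via the [FromKeyedServices] attribute in constructors:
    public sealed class AncillaryRepository
    {
        private readonly IDataStore _platformStore;

        public AncillaryRepository(
            [FromKeyedServices(StoreRegistration.PlatformKey)] IDataStore platformStore)
        {
            _platformStore = platformStore;
        }
    }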

While we can use whatever workaround to achieve this same thing, developers need to understand the implications of these scopes and why they are different.

We need a clear abstraction that speaks to this and makes it first-class.

Current Solution

We have started creating extension methods on IServiceCollection and on IServiceProvider, such as
services.RegisterPlatform() and container.ResolveForPlatform(), but the current abstraction is not well thought out, nor descriptive enough.

The implementation is also a little janky, since it wraps the desired dependency in another type so it can co-exist in the container. For example, the platform version of the IDataStore adapter would be in the container as a singleton IPlatformDependency<IDataStore>, whereas the tenanted version is in the container as just IDataStore (either as a singleton or scoped).

Future Solutions

We could create an abstraction over the top of dotnet abstractions to make this clear, but the disadvantage is having to re-educate developers familiar with dotnet APIs already.

We need to find a balance between usability and descriptiveness so that developers are not using the wrong lifetime and scopes for their dependencies in each subdomain.

API Documentation not using XML Docs

At present, we are defining a custom description, with expected errors, in the XML docs of specific request DTOs, and then we are using the Source Generator to copy that data into the generated minimal API definitions, and including that data in the Swagger docs.

This works great when running the Roslyn source generator manually (in the Rider IDE).
But when the source generator is run as part of the MSBuild build, the data is not available to the source generator.

We think this is because, when the build occurs, even though the XML documentation files are present on disk in the output directory of the assembly that needs to read them, the source generator is not reading them for some reason.

We don't know why the source generator is not loading them into its compilation, but the data is not there in the source generator when it runs.

We suspect that we have to write some code to load those files at compilation time.
We have a request to the Roslyn team asking how we do that: dotnet/roslyn#23673

Post without body

If you POST a request with no body, we get this exception (500).

We should probably return something more helpful than a 500.

07:43:55 fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
      An unhandled exception has occurred while executing the request.
      System.ArgumentNullException: Value cannot be null. (Parameter 'request')
         at MediatR.Mediator.Send[TResponse](IRequest`1 request, CancellationToken cancellationToken)
         at ApiHost1.MinimalApiRegistration.<>c.<<RegisterRoutes>b__0_14>d.MoveNext() in C:\Projects\github\jezzsantos\saastack\src\ApiHost1\Tools.Generators.Web.Api\Tools.Generators.Web.Api.MinimalApiMediatRGenerator\MinimalApiMediatRGeneratedHandlers.g.cs:line 126
      --- End of stack trace from previous location ---
         at Microsoft.AspNetCore.Http.RequestDelegateFactory.<TaskOfTToValueTaskOfObject>g__ExecuteAwaited|92_0[T](Task`1 task)
         at Infrastructure.Web.Api.Common.Endpoints.ContentNegotiationFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next) in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Api.Common\Endpoints\ContentNegotiationFilter.cs:line 24
         at Infrastructure.Web.Api.Common.Endpoints.RequestCorrelationFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next) in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Api.Common\Endpoints\RequestCorrelationFilter.cs:line 32
         at Infrastructure.Web.Api.Common.Endpoints.ApiUsageFilter.InvokeAsync(EndpointFilterInvocationContext context, EndpointFilterDelegate next) in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Api.Common\Endpoints\ApiUsageFilter.cs:line 67
         at Microsoft.AspNetCore.Http.RequestDelegateFactory.<ExecuteValueTaskOfObject>g__ExecuteAwaited|129_0(ValueTask`1 valueTask, HttpContext httpContext, JsonTypeInfo`1 jsonTypeInfo)
         at Microsoft.AspNetCore.Http.RequestDelegateFactory.<>c__DisplayClass102_2.<<HandleRequestBodyAndCompileRequestDelegateForJson>b__2>d.MoveNext()
      --- End of stack trace from previous location ---
         at Infrastructure.Web.Hosting.Common.Extensions.WebApplicationExtensions.<>c.<<EnableEventingPropagation>b__4_1>d.MoveNext() in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Hosting.Common\Extensions\WebApplicationExtensions.cs:line 155
      --- End of stack trace from previous location ---
         at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
         at Infrastructure.Web.Hosting.Common.Pipeline.MultiTenancyMiddleware.InvokeAsync(HttpContext context, ITenancyContext tenancyContext, ICallerContextFactory callerContextFactory, ITenantDetective tenantDetective, IEndUsersService endUsersService, IOrganizationsService organizationsService) in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Hosting.Common\Pipeline\MultiTenancyMiddleware.cs:line 52
         at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
         at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.<Invoke>g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)

How To Monetise?

When the code template is nearing its completed state, it will represent significant value to those who wish to use it.
Estimates (from past initiatives) put the codebase at about 70,000 LOC, which coarsely equates to about 700-7,000 days of work (depending on your measure of production lines of code per day, 10-100 LOC/day). Conservatively, that's ~2 years of work that you would have to spend achieving a similar outcome if you started from scratch.

Caveat: many of the available components, specifically some of the technology adapters included in this codebase, may never be used, and those can be deducted from that estimate.

Thus, an appropriate financial contribution from the buyer could be justified by the time saving to their business.

This contribution (in whatever form) would ideally be a one-off amount (rather than recurring), and it could also be per usage (i.e. per product used).

The code template, in its entire form, would need to remain available (open-sourced) to browse and examine by the buyer. This is believed to be necessary in order to feel confident judging its value, and in purchasing it.

It would also need to be licensed (e.g., Unlicense or MIT) such that ALL rights are transferable to the buyer, to do as they please, with no limitations.

However, being fully available (to browse) presents a challenge in collecting contributions, since those obtaining it can easily do so without making any contribution back. There is no mechanism on this platform to help with that.

Option 1: Fully open source it

  1. We open-source the repo (make it public), and spend time and money on marketing it.
  2. We only ask for sponsors or for voluntary financial contributions

Expectations:

  • Interested people download and use it. Maybe it can get some popularity?
  • Few are likely to pay any financial contribution

Cons:

  • $0 revenue
  • Possibility of sponsors? unlikely
  • Only fans and the occasional hero would be engaged in making it better

Option 2: Make it a standard paid-for-product

  1. We keep the code hidden (private repo)
  2. We provide an open-source license to the codebase (MIT)
  3. We expose code snippets and the like, to demonstrate its patterns
  4. We create a landing page with demos, and videos and the like
  5. We license it per instance with a license to use on 1 software product only (something like ~USD$150 per instance)

Expectations:

  • $ revenue
  • Less uptake, as it is harder to see the value (before buying)
  • Probability of no sponsors?
  • Buyers now engaged in helping to improve it (since they paid for it), and will expect support

Cons:

  • Possibility of buyers then ripping it off and open-sourcing it themselves
  • No way to enforce repeated charges (for extra uses); we may only ever get one fee per buyer
  • They will require a private forum and service to support it moving forward

Option 3: Open source it with dual license

  1. We apply dual licenses to the repo (MIT + Other)
  2. We share all the source, except one small nuget component that is critically included in each runtime host.
  3. This component is closed-sourced and possibly (obfuscated) and published to nuget.
  4. This component can verify a license key that must be available to the runtime
  5. Code for the nuget is kept in a private repo.
  6. To get a license key, customers pay a one-time fee for it. Let's say USD150 (for argument's sake) for an eternal license for that person to use for that product (or for any number of projects); we could even have two fees (one per product, one for any number of products).

Expectations:

  • $ revenue
  • Risk of buyer hesitation because of open source principles and the license fee
  • Possibility of reverse engineering the nuget, and working around the license fee

Cons:

  • Possibility of buyers then ripping it off and open-sourcing it themselves
  • They will require a private forum and service to support it moving forward

Simulate and Document event versioning

We need to demonstrate (and document) how to version (IDomainEvent) events over the course of changes to the software, and have some examples and guidance for people to follow.

[AsParameters] on a RequestDelegate does not honor the [JsonPropertyNameAttribute] applied to the request object

We can define a request DTO like this:

[Route("/avatar/{Hash}", OperationMethod.Get)]
public class GravatarGetImageRequest : IWebRequestVoid
{
    [JsonPropertyName("d")] public string? Default { get; set; }

    public required string Hash { get; set; }

    [JsonPropertyName("s")] public int? Width { get; set; }
}

We can see that we intend that the Default property be rendered in the request JSON as a field with name "d".
When this request is sent, because it is a GET request, the value of Default will be converted to a query string parameter, like this: /avatar/ahash?d=adefault

This is all well and good.

Now, in our web host, the handler is defined like this:

var stubgravatarapiGroup = app.MapGroup("/gravatar")
                .WithGroupName("StubGravatarApi")
                .RequireCors("__DefaultCorsPolicy")
                .AddEndpointFilter<ApiUsageFilter>()
                .AddEndpointFilter<RequestCorrelationFilter>()
                .AddEndpointFilter<ContentNegotiationFilter>();
            stubgravatarapiGroup.MapGet("/avatar/{Hash}",
                async (IMediator mediator, [Microsoft.AspNetCore.Http.AsParameters] GravatarGetImageRequest request) =>
                     await mediator.Send(request, CancellationToken.None));

Note the use of [AsParameters] on the request DTO object.

When our Host receives the request: /avatar/ahash?d=adefault we are expecting the Default property of the GravatarGetImageRequest request DTO to be populated by the value of the d parameter in the query, because of the existence of the [JsonPropertyName] attribute on that property.

But this is not the case.
Instead, the Default property is unpopulated once the request is handled.

In fact, the ASP.NET runtime will throw an exception if the Default property is declared as required instead of nullable, since the value is never populated by the request pipeline.

So there is something in the ASP.NET pipeline that ignores the [JsonPropertyName] attribute.
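
Until the pipeline honors the attribute, one possible workaround (a sketch only, not the template's actual fix) is to sidestep [AsParameters] binding with a custom static BindAsync method on the request DTO, which minimal APIs will invoke when present, and read the aliased query parameters manually. The route/interface attributes are elided here for brevity:

using System.Linq;
using System.Text.Json.Serialization;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class GravatarGetImageRequest
{
    [JsonPropertyName("d")] public string? Default { get; set; }

    public string? Hash { get; set; }

    [JsonPropertyName("s")] public int? Width { get; set; }

    // Minimal APIs call this instead of [AsParameters] binding when it exists,
    // letting us honor the JSON aliases ("d", "s") ourselves.
    public static ValueTask<GravatarGetImageRequest?> BindAsync(HttpContext context)
    {
        var query = context.Request.Query;
        var request = new GravatarGetImageRequest
        {
            Hash = context.Request.RouteValues["Hash"] as string,
            Default = query["d"].FirstOrDefault(),
            Width = int.TryParse(query["s"].FirstOrDefault(), out var width)
                ? width
                : null
        };
        return ValueTask.FromResult<GravatarGetImageRequest?>(request);
    }
}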

Tagline

Building the hard things so you don't have to

GTM Plan

Items to consider:

  • A LinkedIn post announcing SaaStack and seeking feedback on the need for it.

  • A LinkedIn post with this caption: "We have chosen to use [Domain Driven Design](0040-modeling.md) as the tool for capturing and encoding our "conceptual models" of the world to align with the mental models our customers have, of the problems we are trying to solve for them."

Others to come.

Reliability of Projections and Notifications

As a first pass, we've implemented a "synchronous" and "unreliable" mechanism to relay change events to read models and to notifications. It works "consistently", which is convenient, but it is also naive.

See InProcessSynchronousProjectionRelay and InProcessSynchronousNotificationRelay.

These implementations (deliberately) do not model the intermediary message broker abstraction that needs to be represented in the architecture. They skip past that notional component entirely and do everything in-process.

Furthermore, there is nothing fault-tolerant about them. If they fail to relay any event to any listener, they fail at that point, even though the save of the events/snapshot has already succeeded.

This is not really an acceptable outcome, since half of the listeners may have processed the event and half may have not. There is no replay capability, and the push mechanism does not keep track of progress for each listener. Also, for snapshotting schemes, the events themselves are unrecoverable from memory!

Solution

These implementations need to be replaced (in a second pass) with more reliable and "asynchronous" versions of relays that involve an explicit message broker abstraction that when implemented, guarantees these things:

  1. They are consistent and reliable with updates to the aggregate. We assume that the second part of the process (relaying change events to a reliable mechanism, i.e., a queue or bus) could fail for any number of reasons after the first part of the process (updating aggregate state) succeeds. This must not be allowed to happen.
  2. Change events are always "published" in order to downstream consumers and can never be received out of order by consumers.
  3. Change events must be cached and indexed per consumer, since each consumer may be at a different index at any one time.
  4. Downstream consumers must be able to deal with replays and be idempotent.
  5. Downstream consumers must handle fault tolerance themselves to keep up to date (pull vs. push).

We also have to remember that there will be two implementations, since the source of these change events could originate either from a snapshotting persistence store or from an event-sourced persistence store.

  • For the snapshotting scheme, it is likely that we need to engineer an "outbox pattern" implementation (using a "transaction" abstraction, to be implemented by the specific IDataStore adapter). This needs to cache the recent events until they are handed to the message broker (see the sketch below).
  • For the event-sourced scheme, it is likely we need to engineer an XXX pattern, to get events from the store directly to the message broker.
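
To make the outbox idea concrete, here is a minimal sketch of what the snapshotting scheme's first step might look like. The ITransaction and IOutbox shapes are assumptions for illustration, not existing abstractions in the codebase:

using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstractions; the real ones would live alongside IDataStore and
// be implemented by each specific technology adapter.
public interface ITransaction
{
    Task CommitAsync(CancellationToken cancellationToken);
}

public interface IOutbox
{
    // Written in the SAME transaction as the aggregate state, so either both the
    // state change and its change events are saved, or neither is.
    Task AppendAsync(ITransaction transaction, object changeEvent, CancellationToken cancellationToken);
}

public class SnapshottingSaveExample
{
    private readonly IOutbox _outbox;

    public SnapshottingSaveExample(IOutbox outbox)
    {
        _outbox = outbox;
    }

    public async Task SaveAsync(ITransaction transaction, object[] changeEvents,
        CancellationToken cancellationToken)
    {
        // 1. Persist the new aggregate state using the same transaction (elided).
        // 2. Persist the change events to the outbox within that transaction.
        foreach (var changeEvent in changeEvents)
        {
            await _outbox.AppendAsync(transaction, changeEvent, cancellationToken);
        }

        // 3. Commit atomically. A separate relay later drains the outbox to the
        //    message broker, retrying until each event is handed over.
        await transaction.CommitAsync(cancellationToken);
    }
}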

'required' properties in the Request DTO are throwing exceptions

A typical Request DTO might look like this:

namespace Infrastructure.Web.Api.Operations.Shared.Cars;

[Route("/cars", OperationMethod.Post, AccessType.Token)]
[Authorize(Roles.Tenant_Member, Features.Tenant_PaidTrial)]
public class RegisterCarRequest : TenantedRequest<GetCarResponse>
{
    public required string Jurisdiction { get; set; }

    public required string Make { get; set; }

    public required string Model { get; set; }

    public required string NumberPlate { get; set; }

    public required int Year { get; set; }
}

Notice the use of the required keyword, since this project (and all projects) have <Nullable>enable</Nullable>.

This works at the C# code level, but these DTO objects are populated by ASP.NET at runtime due to minimal API registrations like this:

var carsapiGroup = app.MapGroup(string.Empty)
                .WithTags("CarsApi")
                .RequireCors("__DefaultCorsPolicy")
                .AddEndpointFilter<global::Infrastructure.Web.Api.Common.Endpoints.MultiTenancyFilter>()
                .AddEndpointFilter<global::Infrastructure.Web.Api.Common.Endpoints.ApiUsageFilter>()
                .AddEndpointFilter<global::Infrastructure.Web.Api.Common.Endpoints.RequestCorrelationFilter>()
                .AddEndpointFilter<global::Infrastructure.Web.Api.Common.Endpoints.ContentNegotiationFilter>();

            // ... other methods ...
            carsapiGroup.MapGet("/cars/{Id}",
                async (global::MediatR.IMediator mediator, [global::Microsoft.AspNetCore.Http.AsParameters] global::Infrastructure.Web.Api.Operations.Shared.Cars.GetCarRequest request) =>
                     await mediator.Send(request, global::System.Threading.CancellationToken.None))
                .RequireAuthorization("Token")
                .RequireCallerAuthorization("POLICY:{|Features|:{|Tenant|:[|basic_features|]},|Roles|:{|Tenant|:[|org_member|]}}")
                .WithOpenApi(op =>
                    {
                        op.OperationId = "GetCar";
                        op.Description = "(request type: GetCarRequest)";
                        op.Responses.Clear();
                        return op;
                    });

            carsapiGroup.MapPost("/cars",
                async (global::MediatR.IMediator mediator, global::Infrastructure.Web.Api.Operations.Shared.Cars.RegisterCarRequest request) =>
                     await mediator.Send(request, global::System.Threading.CancellationToken.None))
                .RequireAuthorization("Token")
                .RequireCallerAuthorization("POLICY:{|Features|:{|Tenant|:[|paidtrial_features|]},|Roles|:{|Tenant|:[|org_member|]}}")
                .WithOpenApi(op =>
                    {
                        op.OperationId = "RegisterCar";
                        op.Description = "(request type: RegisterCarRequest)";
                        op.Responses.Clear();
                        return op;
                    });

The problem comes when we submit either of these requests in any HTTP client tool, like this, with an invalid or incomplete request:

POST https://localhost:5001/cars
Accept: application/json
Content-Type: application/json

{
  "Make": ""
}

This is clearly missing required properties like Jurisdiction, Model, NumberPlate, and Year, so we should end up with an HTTP 400 (validation error), not an HTTP 500!

But, instead, we get one of these HTTP 500 responses, because ASP.NET (internally) struggles to handle the required keyword when there is no data for that property.

In fact, we get two different exceptions, depending on a couple of things:

This exception is from a GET request that we map to use [AsParameters] on the request object, where the required property is missing from the URL:

Microsoft.AspNetCore.Http.BadHttpRequestException: Required parameter "string RequiredField" was not provided from query string.
   at Microsoft.AspNetCore.Http.RequestDelegateFactory.Log.RequiredParameterNotProvided(HttpContext httpContext, String parameterTypeName, String parameterName, String source, Boolean shouldThrow)
   at lambda_method274(Closure, Object, HttpContext)
   at Infrastructure.Web.Hosting.Common.Extensions.WebApplicationExtensions.<>c.<<EnableEventingPropagation>b__4_1>d.MoveNext() in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Hosting.Common\Extensions\WebApplicationExtensions.cs:line 159
--- End of stack trace from previous location ---
   at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
   at Infrastructure.Web.Hosting.Common.Pipeline.MultiTenancyMiddleware.InvokeAsync(HttpContext context, ITenancyContext tenancyContext, ICallerContextFactory callerContextFactory, ITenantDetective tenantDetective) in C:\Projects\github\jezzsantos\saastack\src\Infrastructure.Web.Hosting.Common\Pipeline\MultiTenancyMiddleware.cs:line 55
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.<Invoke>g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)

see also: dotnet/aspnetcore#52881 (comment)

This exception is from a POST request, where the required property is missing from the body of the request:

"type": "https://tools.ietf.org/html/rfc7231#section-6.6.1",
  "title": "An unexpected error occurred",
  "status": 500,
  "detail": "Failed to read parameter \"RegisterCarRequest request\" from the request body as JSON.",
  "instance": "https://localhost:5001/cars",
  "exception": "Microsoft.AspNetCore.Http.BadHttpRequestException: Failed to read parameter \"RegisterCarRequest request\" from the request body as JSON.\r\n ---> System.Text.Json.JsonException: JSON deserialization for type 'Infrastructure.Web.Api.Operations.Shared.Cars.RegisterCarRequest' was missing required properties, including the following: jurisdiction, model, numberPlate, year\r\n   at System.Text.Json.ThrowHelper.ThrowJsonException_JsonRequiredPropertyMissing(JsonTypeInfo parent, BitArray requiredPropertiesSet)\r\n   at System.Text.Json.Serialization.Converters.ObjectDefaultConverter`1.OnTryRead(Utf8JsonReader& reader, Type typeToConvert, JsonSerializerOptions options, ReadStack& state, T& value)\r\n   at System.Text.Json.Serialization.JsonConverter`1.TryRead(Utf8JsonReader& reader, Type typeToConvert, JsonSerializerOptions options, ReadStack& state, T& value, Boolean& isPopulatedValue)\r\n   at System.Text.Json.Serialization.JsonConverter`1.ReadCore(Utf8JsonReader& reader, JsonSerializerOptions options, ReadStack& state)\r\n   at System.Text.Json.Serialization.Metadata.JsonTypeInfo`1.ContinueDeserialize(ReadBufferState& bufferState, JsonReaderState& jsonReaderState, ReadStack& readStack)\r\n   at System.Text.Json.Serialization.Metadata.JsonTypeInfo`1.DeserializeAsync(Stream utf8Json, CancellationToken cancellationToken)\r\n   at System.Text.Json.Serialization.Metadata.JsonTypeInfo`1.DeserializeAsObjectAsync(Stream utf8Json, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.Http.HttpRequestJsonExtensions.ReadFromJsonAsync(HttpRequest request, JsonTypeInfo jsonTypeInfo, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.Http.HttpRequestJsonExtensions.ReadFromJsonAsync(HttpRequest request, JsonTypeInfo jsonTypeInfo, CancellationToken cancellationToken)\r\n   at Microsoft.AspNetCore.Http.RequestDelegateFactory.<HandleRequestBodyAndCompileRequestDelegateForJson>g__TryReadBodyAsync|102_0(HttpContext httpContext, Type bodyType, String parameterTypeName, String parameterName, Boolean allowEmptyRequestBody, Boolean throwOnBadRequest, JsonTypeInfo jsonTypeInfo)\r\n   --- End of inner exception stack trace ---\r\n   at Microsoft.AspNetCore.Http.RequestDelegateFactory.Log.InvalidJsonRequestBody(HttpContext httpContext, String parameterTypeName, String parameterName, Exception exception, Boolean shouldThrow)\r\n   at Microsoft.AspNetCore.Http.RequestDelegateFactory.<HandleRequestBodyAndCompileRequestDelegateForJson>g__TryReadBodyAsync|102_0(HttpContext httpContext, Type bodyType, String parameterTypeName, String parameterName, Boolean allowEmptyRequestBody, Boolean throwOnBadRequest, JsonTypeInfo jsonTypeInfo)\r\n   at Microsoft.AspNetCore.Http.RequestDelegateFactory.<>c__DisplayClass102_2.<<HandleRequestBodyAndCompileRequestDelegateForJson>b__2>d.MoveNext()\r\n--- End of stack trace from previous location ---\r\n   at Infrastructure.Web.Hosting.Common.Extensions.WebApplicationExtensions.<>c.<<EnableEventingPropagation>b__4_1>d.MoveNext() in C:\\Projects\\github\\jezzsantos\\saastack\\src\\Infrastructure.Web.Hosting.Common\\Extensions\\WebApplicationExtensions.cs:line 159\r\n--- End of stack trace from previous location ---\r\n   at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)\r\n   at Infrastructure.Web.Hosting.Common.Pipeline.MultiTenancyMiddleware.InvokeAsync(HttpContext context, ITenancyContext tenancyContext, ICallerContextFactory callerContextFactory, ITenantDetective tenantDetective) in 
C:\\Projects\\github\\jezzsantos\\saastack\\src\\Infrastructure.Web.Hosting.Common\\Pipeline\\MultiTenancyMiddleware.cs:line 55\r\n   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)\r\n   at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddlewareImpl.<Invoke>g__Awaited|10_0(ExceptionHandlerMiddlewareImpl middleware, HttpContext context, Task task)"
}

See also: https://learn.microsoft.com/en-us/dotnet/standard/serialization/system-text-json/required-properties as to why the required keyword is being honored by the JSON deserializer.

Solution

We need to resolve this one way or the other, since an inbound HTTP request may be missing any piece of data, and we want the appropriate HTTP response in all of these cases:

  1. GET /resource/{Id} when the Id property is null/missing from the request -> HTTP 404 NotFound (since no effective route can be matched)
  2. GET /resource/{Id} when another property (e.g., Make) is missing from the request -> HTTP 400 BadRequest (since the validator kicks in)
  3. POST /resource/{Id} when the Id property is null/missing from the request -> HTTP 404 NotFound (since no effective route can be matched)
  4. POST /resource/{Id} when another property (e.g., Make) is missing from the request -> HTTP 400 BadRequest (since the validator kicks in)

At present, the only workable solution is this:

  1. Forbid the usage of the required keyword in all Request DTOs, and turn off nullability (i.e. <Nullable>disable</Nullable>, or use #pragma warning disable CS8618, either for each class or for the whole assembly).
  2. Also, for all GET requests, we would need to make all properties in the request DTO string?, to bypass the issues with the [AsParameters] exception (since GET requests do not support bodies).

In either GET or POST requests, parameters like Id that are used in the route path can be declared as string or string?; it makes no difference. However, in all GET and DELETE requests, all properties of the request DTO must be string?.
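
Applied to the earlier example, the workaround would reshape the request DTO roughly like this (illustrative only; the request validator, not the type system, then enforces which fields are mandatory):

namespace Infrastructure.Web.Api.Operations.Shared.Cars;

#pragma warning disable CS8618 // properties are populated by the ASPNET request pipeline

[Route("/cars", OperationMethod.Post, AccessType.Token)]
[Authorize(Roles.Tenant_Member, Features.Tenant_PaidTrial)]
public class RegisterCarRequest : TenantedRequest<GetCarResponse>
{
    // No 'required' keyword: a missing value now reaches the request validator
    // and produces an HTTP 400, instead of an HTTP 500 during deserialization.
    public string Jurisdiction { get; set; }

    public string Make { get; set; }

    public string Model { get; set; }

    public string NumberPlate { get; set; }

    // Nullable, so that an absent Year can be detected by the validator,
    // rather than silently defaulting to 0.
    public int? Year { get; set; }
}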

Billing Integration - limits and quotas

At the moment, we have no abstraction and no code enforcing limits and counting quotas (feature use) in the product.

It would be good to add that to the billing abstraction, to make it easy to adopt (by example).
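
For example, a purely hypothetical shape (nothing like this exists in the codebase yet) to illustrate what that abstraction might look like:

using System.Threading;
using System.Threading.Tasks;

// Hypothetical abstraction for enforcing plan limits and counting feature usage.
public interface IFeatureQuotaService
{
    // Returns false when the tenant has exhausted the quota for this feature on its current plan.
    Task<bool> CanConsumeAsync(string tenantId, string featureName, CancellationToken cancellationToken);

    // Records one unit of usage, to be reconciled later with the billing provider.
    Task ConsumeAsync(string tenantId, string featureName, CancellationToken cancellationToken);
}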

Background Processing

We are adding asynchronous background processing mechanisms to handle certain workloads (e.g., usages, audits, and notifications).

TODO:

  • Establish Ancillary API
  • Establish Workers
  • Establish Azure Functions and AWS Lambdas to trigger from their respective queues
  • Deliver messages to queue
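
For the Azure flavor of the third item, a queue-triggered worker might look roughly like this sketch (using the isolated worker model; the queue name, class name, and message handling are assumptions):

using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

// Hypothetical worker: drains the "usages" queue and forwards each message
// to the Ancillary API (forwarding elided).
public class DeliverUsages
{
    private readonly ILogger<DeliverUsages> _logger;

    public DeliverUsages(ILogger<DeliverUsages> logger)
    {
        _logger = logger;
    }

    [Function("DeliverUsages")]
    public void Run([QueueTrigger("usages")] string message)
    {
        _logger.LogInformation("Relaying usage message: {Message}", message);
        // Forward the message to the Ancillary API here.
    }
}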

Add Transactions to database implementations of IDataStore, IBlobStore and IEventStore

Problem

At the moment, no matter what the actual technology provider, IDataStore implementations are open to concurrency issues when we call any of these methods in SnapshottingDddCommandStore<TAggregateOrEntity>:

  • DeleteAsync()
  • ResurrectDeletedAsync() or
  • UpsertAsync()

In each of these, we first fetch the record via IDataStore.RetrieveAsync() and then call either:

  • RemoveAsync()
  • AddAsync() or,
  • ReplaceAsync()

The same problem exists with SnapshottingStore<TDto> implementations.

The same problem would exist in IBlobStore implementations.

We have the same kind of problem in a slightly different way for IEventStore in EventSourcingDddCommandStore<TAggregateRoot> between calling LoadAsync() and SaveAsync() if using a database implementation.

If we are doing this in a multi-threaded environment, which we are with HTTP requests, then there is an outside chance that, in a busy SaaS system, we will experience concurrency issues, particularly with event-sourced aggregates using EventSourcingDddCommandStore<TAggregateRoot>.

Solution

We can support the notion of a logical transaction, where the *Store classes above create a transaction and pass it to successive calls of the *Store methods, while also allowing the developer to create and pass down transactions from their own code into the *Store implementations.
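
In sketch form, that might look like the following (the names are assumptions, not the final design):

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical shape of a logical transaction spanning successive store calls.
public interface IStoreTransaction : IAsyncDisposable
{
    Task CommitAsync(CancellationToken cancellationToken);
}

// Hypothetical transactional surface over IDataStore (simplified to object records).
public interface IDataStoreWithTransactions
{
    Task<IStoreTransaction> BeginTransactionAsync(CancellationToken cancellationToken);

    Task<object?> RetrieveAsync(IStoreTransaction transaction, string id, CancellationToken cancellationToken);

    Task ReplaceAsync(IStoreTransaction transaction, string id, object entity, CancellationToken cancellationToken);
}

public class UpsertExample
{
    // The fetch and the write share one transaction, so a concurrent writer can
    // no longer interleave between them (the concurrency issue described above).
    public async Task UpsertAsync(IDataStoreWithTransactions store, string id, object entity,
        CancellationToken cancellationToken)
    {
        await using var transaction = await store.BeginTransactionAsync(cancellationToken);
        var existing = await store.RetrieveAsync(transaction, id, cancellationToken);
        // ... merge 'entity' with 'existing' (elided) ...
        await store.ReplaceAsync(transaction, id, entity, cancellationToken);
        await transaction.CommitAsync(cancellationToken);
    }
}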
