SaaS Metering system on AWS demo project!

This repository provides CDK scripts and sample code showing how to implement a simple SaaS metering system.

The diagram below shows what we are implementing.

(Architecture diagram: saas-metering-arch)

The cdk.json file tells the CDK Toolkit how to execute your app.

This project is set up like a standard Python project. The initialization process also creates a virtualenv within this project, stored under the .venv directory. To create the virtualenv it assumes that there is a python3 (or python for Windows) executable in your path with access to the venv package. If for any reason the automatic creation of the virtualenv fails, you can create the virtualenv manually.

To manually create a virtualenv on MacOS and Linux:

$ python3 -m venv .venv

After the init process completes and the virtualenv is created, you can use the following step to activate your virtualenv.

$ source .venv/bin/activate

If you are on a Windows platform, you would activate the virtualenv like this:

% .venv\Scripts\activate.bat

Once the virtualenv is activated, you can install the required dependencies.

(.venv) $ pip install -r requirements.txt

Deploy

At this point you can synthesize the CloudFormation template for this code.

(.venv) $ export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
(.venv) $ export CDK_DEFAULT_REGION=$(curl -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
(.venv) $ cdk synth --all

Use the cdk deploy command to create the stacks shown above.

(.venv) $ cdk deploy --require-approval never --all

To add additional dependencies, for example other CDK libraries, just add them to your setup.py file and rerun the pip install -r requirements.txt command.
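
For example, a new dependency can be declared in setup.py roughly like this (a minimal sketch; the field values are illustrative and the actual setup.py in this project may differ):

    # setup.py (sketch): declare the CDK app's runtime dependencies
    import setuptools

    setuptools.setup(
        name="saas-metering-system-on-aws",
        version="0.0.1",
        install_requires=[
            "aws-cdk-lib>=2.0.0",   # CDK v2 library (illustrative version pin)
            "constructs>=10.0.0",
        ],
        python_requires=">=3.7",
    )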

Run Test

  1. Register a Cognito user using the AWS CLI

    aws cognito-idp sign-up \
      --client-id your-user-pool-client-id \
      --username "user@example.com" \
      --password "user-password"
    

    Note: You can find UserPoolClientId with the following command:

    aws cloudformation describe-stacks --stack-name your-cloudformation-stack-name | jq -r '.Stacks[0].Outputs | map(select(.OutputKey == "UserPoolClientId")) | .[0].OutputValue'
    
  2. Confirm the user, so they can log in:

    aws cognito-idp admin-confirm-sign-up \
      --user-pool-id your-user-pool-id \
      --username "user@example.com"
    

    At this point, if you look at your Cognito user pool, you will see that the user is confirmed and ready to log in. (Screenshot: amazon-cognito-user-pool-users)

    Note: You can find UserPoolId with the following command:

    aws cloudformation describe-stacks --stack-name your-cloudformation-stack-name | jq -r '.Stacks[0].Outputs | map(select(.OutputKey == "UserPoolId")) | .[0].OutputValue'
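
    Alternatively, both outputs can be read with boto3 (a minimal sketch; it assumes your AWS credentials and region are already configured):

    # fetch_outputs.py (sketch): read UserPoolId and UserPoolClientId from the stack outputs
    import boto3

    cfn = boto3.client("cloudformation")
    stack = cfn.describe_stacks(StackName="your-cloudformation-stack-name")["Stacks"][0]
    outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
    print(outputs["UserPoolId"], outputs["UserPoolClientId"])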
    
  3. Log the user in to get an identity JWT token

    aws cognito-idp initiate-auth \
      --auth-flow USER_PASSWORD_AUTH \
      --auth-parameters USERNAME="user@example.com",PASSWORD="user-password" \
      --client-id your-user-pool-client-id
    
  4. Invoke REST API method

    $ MY_ID_TOKEN=$(aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --auth-parameters USERNAME="user@example.com",PASSWORD="user-password" --client-id your-user-pool-client-id | jq -r '.AuthenticationResult.IdToken')
    $ curl -X GET 'https://{your-api-gateway-id}.execute-api.{region}.amazonaws.com/prod/random/strings?len=7' --header "Authorization: ${MY_ID_TOKEN}"
    

    The response is:

    ["weBJDKv"]
    
  5. Generate test requests and run them.

    $ source .venv/bin/activate
    (.venv) $ pip install "requests==2.28.1"
    (.venv) $ python tests/run_test.py --execution-id {your-api-gateway-execution-id} \
                                       --region-name {region} \
                                       --auth-token ${MY_ID_TOKEN} \
                                       --max-count 10
    
  6. Check the access logs in S3

    After 5~10 minutes, you can see that the access logs have been delivered by Kinesis Data Firehose to S3 and stored in a folder structure by year, month, day, and hour.

    (Screenshot: amazon-apigateway-access-log-in-s3)
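
    You can also verify the partitioned layout by listing the delivered objects with boto3 (a minimal sketch; the bucket name is the same placeholder used throughout this README):

    # list_logs.py (sketch): show the year/month/day/hour folder structure in S3
    import boto3

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="apigw-access-log-to-firehose-xxxxx", Prefix="json-data/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])  # e.g. json-data/year=2023/month=01/day=10/hour=06/...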

  7. Creating and loading a table with partitioned data in Amazon Athena

    Go to Athena on the AWS Management console.

    • (step 1) Create a database

      To create a new database named mydatabase, enter the following statement in the Athena query editor and click the Run button to execute the query.

      CREATE DATABASE IF NOT EXISTS mydatabase
      
    • (step 2) Create a table

      Copy the following query into the Athena query editor, replace the xxxxx in the LOCATION clause with your S3 bucket name, and execute the query to create a new table.

      CREATE EXTERNAL TABLE mydatabase.restapi_access_log_json (
        `requestId` string,
        `ip` string,
        `user` string,
        `requestTime` timestamp,
        `httpMethod` string,
        `resourcePath` string,
        `status` string,
        `protocol` string,
        `responseLength` int)
      PARTITIONED BY (
        `year` int,
        `month` int,
        `day` int,
        `hour` int)
      ROW FORMAT SERDE
        'org.openx.data.jsonserde.JsonSerDe'
      STORED AS INPUTFORMAT
        'org.apache.hadoop.mapred.TextInputFormat'
      OUTPUTFORMAT
        'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat'
      LOCATION
        's3://apigw-access-log-to-firehose-xxxxx/json-data'
      

      If the query is successful, a table named restapi_access_log_json is created and displayed on the left panel under the Tables section.

      If you get an error, check if (a) you have updated the LOCATION to the correct S3 bucket name, (b) you have mydatabase selected under the Database dropdown, and (c) you have AwsDataCatalog selected as the Data source.

    • (step 3) Load the partition data

      Run the following query to load the partition data.

      MSCK REPAIR TABLE mydatabase.restapi_access_log_json;
      

      After you run this command, the data is ready for querying.

      Instead of the MSCK REPAIR TABLE command, you can use the ALTER TABLE ADD PARTITION command to add each partition manually.

      For example, to load the data in

      s3://apigw-access-log-to-firehose-xxxxx/json-data/year=2023/month=01/day=10/hour=06/

      you can run the following query.

      ALTER TABLE mydatabase.restapi_access_log_json ADD IF NOT EXISTS
      PARTITION (year=2023, month=1, day=10, hour=6)
      LOCATION 's3://apigw-access-log-to-firehose-xxxxx/json-data/year=2023/month=01/day=10/hour=06/';
      
    • (Optional) (step 4) Check partitions

      Run the following query to list all the partitions in an Athena table in unsorted order.

      SHOW PARTITIONS mydatabase.restapi_access_log_json;
      
  8. Run test query

    Enter the following SQL statement and execute the query.

    SELECT COUNT(*)
    FROM mydatabase.restapi_access_log_json;
    
  9. Merge small files into larger ones

    When real-time incoming data is stored in S3 by Kinesis Data Firehose, many small files are created.
    To improve Amazon Athena query performance, it is recommended to combine the small files into fewer large ones.
    It is also better to use a columnar data format (e.g., Parquet, ORC, or Avro) instead of JSON in Amazon Athena.
    To run these tasks periodically, an AWS Lambda function that executes Athena's Create Table As Select (CTAS) query has been deployed; a sketch of such a query appears after the table definition below.
    Now we create an Athena table to query the large files produced by the periodic merge task.

    CREATE EXTERNAL TABLE mydatabase.restapi_access_log_parquet (
      `requestId` string,
      `ip` string,
      `user` string,
      `requestTime` timestamp,
      `httpMethod` string,
      `resourcePath` string,
      `status` string,
      `protocol` string,
      `responseLength` int)
    PARTITIONED BY (
     `year` int,
     `month` int,
     `day` int,
     `hour` int)
    ROW FORMAT SERDE
     'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
    STORED AS INPUTFORMAT
     'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat'
    OUTPUTFORMAT
     'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
    LOCATION
     's3://apigw-access-log-to-firehose-xxxxx/parquet-data'
    

    After creating the table, and once the merge task has completed, the data is ready for querying.
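
    For reference, such a periodic merge can be driven by a CTAS query issued through boto3, along these lines (a minimal sketch under assumed table, bucket, and partition names; the deployed Lambda function's actual query may differ):

    # merge_ctas.py (sketch): convert one hour of JSON logs to Parquet via an Athena CTAS query
    import boto3

    athena = boto3.client("athena")

    # "user" is quoted because it is a reserved word in Athena DML queries.
    ctas_query = """
    CREATE TABLE mydatabase.tmp_access_log_merged
    WITH (
      external_location = 's3://apigw-access-log-to-firehose-xxxxx/parquet-data/year=2023/month=01/day=10/hour=06/',
      format = 'PARQUET'
    ) AS
    SELECT requestId, ip, "user", requestTime, httpMethod, resourcePath, status, protocol, responseLength
    FROM mydatabase.restapi_access_log_json
    WHERE year = 2023 AND month = 1 AND day = 10 AND hour = 6
    """

    response = athena.start_query_execution(
        QueryString=ctas_query,
        QueryExecutionContext={"Database": "mydatabase"},
        ResultConfiguration={"OutputLocation": "s3://apigw-access-log-to-firehose-xxxxx/athena-query-results/"},
    )
    print(response["QueryExecutionId"])  # poll get_query_execution with this ID to track completion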

Clean Up

Delete the CloudFormation stacks by running the command below.

(.venv) $ cdk destroy --force --all

Useful commands

  • cdk ls: lists all stacks in the app
  • cdk synth: emits the synthesized CloudFormation template
  • cdk deploy: deploys this stack to your default AWS account/region
  • cdk diff: compares the deployed stack with the current state
  • cdk docs: opens CDK documentation

Enjoy!

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
