
qujata's Introduction

Overview

In recent years, there has been a substantial amount of research on quantum computers – machines that exploit quantum mechanical phenomena to solve mathematical problems that are difficult or intractable for conventional computers. If large-scale quantum computers are ever built, they will be able to break many of the public-key cryptosystems currently in use. This would seriously compromise the confidentiality and integrity of digital communications on the Internet and elsewhere. The goal of post-quantum cryptography (also called quantum-resistant cryptography) is to develop cryptographic systems that are secure against both quantum and classical computers, and can interoperate with existing communications protocols and networks.

The Qujata project (named after the mythical creature Kujata, or Quyata) is a testbed for evaluating the performance of the supported quantum-safe cryptographic protocols through their client and server vital signs, such as memory and CPU usage, connection time, and download speed once a connection is established.

Algorithms

Algorithms supported

This section lists all quantum-safe algorithms supported by this OQS-based testbed.

As standardization for these algorithms within TLS is not done, all TLS code points/IDs can be changed from their default values to values set by environment variables. This facilitates interoperability testing with TLS1.3 implementations that use different IDs.

Algorithm name Enabled Type
bikel1 Yes Post Quantum
bikel3 Yes Post Quantum
bikel5 Yes Post Quantum
frodo1344aes Yes Post Quantum
frodo1344shake Yes Post Quantum
frodo640aes Yes Post Quantum
frodo640shake Yes Post Quantum
frodo976aes Yes Post Quantum
frodo976shake Yes Post Quantum
hqc128 Yes Post Quantum
hqc192 Yes Post Quantum
hqc256 Yes Post Quantum
kyber1024 Yes Post Quantum
kyber512 Yes Post Quantum
kyber768 Yes Post Quantum
p256_kyber512 Yes Hybrid
p384_kyber768 Yes Hybrid
prime256v1 Yes Classic
secp384r1 Yes Classic
x25519_kyber768 Yes Hybrid

Getting Started

We suggest using the Docker Compose distribution, but a Kubernetes Helm charts procedure is available if you'd prefer to run Qujata in your Kubernetes environment.

A Development Installation procedure for running Qujata in development mode is also provided.

To start, clone the qujata repository:

git clone https://github.com/att/qujata.git
cd qujata

There are two ways to install the Qujata runtime on your machine:

  1. Individual Dockers
  2. Within a pre-built Kubernetes setup

These two options are detailed below.

Option 1: Docker

Prerequisites: Docker, Docker Compose.
Docker Compose is included in the Docker Desktop installation.

  1. cd to the following directory:
cd run/docker
  2. Start the application using:
docker compose up
  3. The UI is available at:
http://localhost:2000/qujata
  4. The Grafana UI is now available by clicking the button in the UI or by using the URL below and selecting the 'Qujata Analysis' dashboard:
http://localhost:3000/

The initial username/password for Grafana is qujata/qujata.

Option 2: Kubernetes

Prerequisites: Kubernetes, Helm.
If you're using Docker Desktop, you can enable Kubernetes in Docker Desktop.

  1. cd to the following directory:
cd run/kubernetes
  2. Install the Helm charts:
helm dependency update
helm install qujata . --create-namespace --namespace qujata
  3. Expose the ports (creates 3 background processes):
kubectl port-forward service/qujata-grafana 3000:3000 -n qujata & \
kubectl port-forward service/qujata-portal 2000:80 -n qujata & \
kubectl port-forward service/qujata-api 3020:3020 -n qujata &

NOTE: The port-forward command does not return; it forwards the port(s) until CTRL+C is pressed (see this page for more details). If background processes were used (& at the end of each bash command, as suggested above), you will need to run the fg command 3 times to bring each service back to the foreground and press CTRL+C on each to stop forwarding, or simply kill all port forwarding at once with:

pkill -f "port-forward"

To check that the right ports are indeed forwarded, open a new bash/terminal window and run the following command:

ps -f | grep 'kubectl' | grep 'port-forward' | awk '{print $10 " " $11}'
  4. The UI is available at:
http://localhost:2000/qujata
  5. The Grafana UI is now available by clicking the button in the UI or by using the URL below and selecting the 'Qujata Analysis' dashboard:
http://localhost:3000/

The initial username/password for Grafana is qujata/qujata.

Development

As opposed to installing the Qujata runtime, explained in the Getting Started section, developing and/or contributing code to the project requires installing all the components listed below on your machine.

To install and run the various components in development mode, see the individual README files for each of them.

  1. Portal
  2. Api
  3. Curl

For UI development contributors, please find below our current UI/UX design. Suggestions for improvement are always welcome. Qujata_Wireframes.pdf

Project Roadmap and Architecture

Information about our roadmap can be found here.

Contributing

Information about how to contribute, can be found here.

Releases

See here.

Acknowledgements

The Qujata project is based on the Open Quantum Safe project and on other work done by NIST and other organizations and individuals working on post-quantum cryptography (PQC) algorithms.

Code of Conduct

The code of conduct can be found here.

License

License can be found here.

qujata's People

Contributors

adibar121, dkhnn, iadibar, kaylarizi, litalmason, mikekcs, milaw, nganani, ohadkoren, yeudit


qujata's Issues

Export experiment results

Description

Allow exporting Test Suite results, with all of its 'Test Runs', to a file.

  • Export results to JSON and/or CSV files to allow further analysis
  • Optional: Export results to PDF (Table, Graphs) for management reports, etc.

JSON example:

{
   "id": 1,
   "name": "test1",
   "description": "test1",
   "start_time": "Thu, 14 Dec 2023 15:37:31 GMT",
   "end_time": "Thu, 14 Dec 2023 15:38:39 GMT",
   "environment_info": {
     "codeRelease": "1.1.0",
     "cpu": "REPLACE_WITH_CPU",
     "cpuArchitecture": "REPLACE_WITH_CPU_ARCHITECTURE",
     "cpuClockSpeed": "REPLACE_WITH_CLOCK_SPEED",
     "cpuCores": 0,
     "nodeSize": "REPLACE_WITH_NODE_SIZE",
     "operatingSystem": "REPLACE_WITH_OPERATING_SYSTEM",
     "resourceName": "REPLACE_WITH_RESOURCE_NAME"
   },
   "testRuns": [
     {
       "algorithm": "Algorithm1",
       "iterations": 1000,
       "messageSizeBytes": 1024,
       "results": {
         "averageCPU": 25.5,
         "averageMemory": 512,
         "errorRate": 0.05,
         "bytesThroughput": 2048000,
         "messagesThroughput": 500,
         "averageTLSHandshakeTime": 10.2
       }
     },
     ...
   ]
}

Acceptance Criteria

  1. Export button in Qujata Portal
  2. Export to CSV file
  3. Export results as JSON file (optional for 1.1.0 milestone)
  4. Export results and graphs to PDF, in a similar format to their presentation on the UI, as a ready-made report that can be shared immediately (nice to have for the 1.1.0 milestone)
  5. Testing: ensure all fields are exported to CSV (the native format is JSON), titles are written in plain English, and there are no issues with larger result files
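
A minimal sketch of the CSV export, assuming the test-suite JSON structure shown above; the function name and column layout are illustrative, not the final implementation:

import csv
import json

def export_test_suite_to_csv(test_suite: dict, csv_path: str) -> None:
    """Flatten a test suite's test runs into one CSV row per run."""
    rows = []
    for run in test_suite.get("testRuns", []):
        row = {
            "suite_id": test_suite["id"],
            "suite_name": test_suite["name"],
            "algorithm": run["algorithm"],
            "iterations": run["iterations"],
            "message_size_bytes": run.get("messageSizeBytes"),
        }
        # Promote every result metric (averageCPU, averageMemory, ...) to its own column.
        row.update(run.get("results", {}))
        rows.append(row)

    # Assumes all runs report the same set of metrics, as in the example JSON above.
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

# Example usage with the JSON structure shown above:
# export_test_suite_to_csv(json.load(open("test_suite_1.json")), "test_suite_1.csv")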

Tasks

  • Saving Test Run results in db
  • Getting Test Run results from cAdvisor
  • New API to export data by test_suite_id (DB & Prometheus)
  • Query prometheus (CPU and memory, client and server) according to test run start/end time and aggregate to avg result
  • UI changes: add button to export data according to design
  • Add get API for test suites and test runs (one Test Suite includes 1 or more Test Runs)

Data analysis (powered by genAI)

Description

The idea is to analyze our reports and to provide coherent insights on the data that we collected.
This is one of the main goals of this project.

Figma:
Insights section "View more" (image)
Insights section "View less" (image)

Acceptance Criteria

  1. Add insights section to our "View experiment page".
  2. Examine the effect of PQC on our metrics: check the correlation between message size, number of iterations and PQC algorithms.
    We would like to use genAI to help with this task:
    Run a POC with GenAI to analyze the raw data and provide insights (prompt engineering).
  3. Populate the insights in each of our official benchmarking reports.

Research questions to address with GenAI prompt

Analysis should be able to answer the following questions for our users and ourselves:

  1. How do CPU/memory usage, error rate, bytes throughput, request count throughput and TLS handshake time compare between the different algorithm types (PQ / Hybrid / Classic)?
  2. Can we see an exponential rise or anomalies in PQ/Hybrid vs classic algorithms when increasing the number of iterations?
  3. Can we see an exponential rise or anomalies in PQ/Hybrid vs classic algorithms when increasing the message size?
  4. Can we see a substantial effect of PQ/Hybrid algorithms on the metrics that we selected for examination?
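
A minimal sketch of how the POC prompt could be assembled from the experiment JSON; the wording and helper name are illustrative only:

import json

RESEARCH_QUESTIONS = [
    "Compare CPU/memory usage, error rate, throughput and TLS handshake time across PQ, hybrid and classic algorithms.",
    "Is there an exponential rise or anomaly for PQ/hybrid vs. classic algorithms as the number of iterations increases?",
    "Is there an exponential rise or anomaly for PQ/hybrid vs. classic algorithms as the message size increases?",
    "Do PQ/hybrid algorithms have a substantial effect on the selected metrics?",
]

def build_insights_prompt(test_suite: dict) -> str:
    """Assemble a single prompt asking the model to answer the research questions above."""
    questions = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(RESEARCH_QUESTIONS))
    return (
        "You are analyzing TLS benchmarking results for post-quantum, hybrid and classic algorithms.\n"
        f"Raw results (JSON):\n{json.dumps(test_suite['testRuns'], indent=2)}\n\n"
        f"Answer the following questions concisely, citing the numbers you rely on:\n{questions}"
    )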

Tasks

  • Add new field to our experiment (AKA test suite) JSON called insights.
  • UI - Conditional section: Display the insights in View experiment page if the insights property is populated.
  • POC - GenAI analyzing the JSON results - compared to manual analysis, per previous tasks
  • Create a prompt to analyze the experiment results JSON (See latest prompt in comments)
  • Manually add the results to our official benchmarking reports under insights property. Make sure to review the genAI generated insights and modify as needed.

Out of scope: automate the genAI insights generation from an Azure instance of the latest gpt model after a run is executed.

New view to the app

Feature Summary


Project Issue

I had some time to think about the execution of the project, as a side observer. As a Frontend developer without permissions to the Figma file, I developed an interesting way to interact with the server.

Approach

  1. Lean and Temporary Server:
    • Initiated a lean and temporary server with the app, excluding the portal.
    • Deployed a UI that centers the whole pack into one view.

Main Changes

  1. Past Experiments Exposure:

    • Past experiments are exposed from the beginning, allowing users to explore them.
  2. Export Features:

    • Export button to get all of the past and current experiments.
    • Export button to download the graph/chart of a specific test.
  3. New Test Initiation:

    • "New" button always on top for easy test initiation.
  4. Running Test Tracing:

    • Tracing for running tests, providing visual and textual visualization on the fly.
  5. JSON View:

    • Interactable rows of data in JSON view for deeper exploration.
  6. Unified Graph:

    • Unified graph for a simpler view.
  7. Data Normalization:

    • Normalization of the data as Log function for an enhanced view of two vectors.
  8. Dedicated Domain:

Purpose of the Feature

I know it is not one of the planned steps declared in the project, but it is worth sharing with you so we can pivot or jump-start ideas worth exploring.

Additional Information

qujata-screenshot

Metrics collection - Bytes throughput, Request count throughput

Overview

Implement bytes throughput and request count throughput metrics for a single test run of an algorithm with its selected number of iterations.

After calculating the new metrics, you will add them as two new fields to our results JSON, requestThroughput and bytesThroughput, to the existing JSON data structure. These fields will represent the per-second throughput of algorithm executions and data processed respectively.
These new metrics shall also be visualized in the UI.

Current JSON Structure

The current JSON data structure appears as follows:

{
   "algorithm": "hqc256",
   "id": 189,
   "iterations": 10000,
   "results": {
      "averageCPU": 3.1,
      "averageMemory": 476
   }
}

Expected JSON Structure

The proposed JSON structure with the new fields is as follows:

{
   "algorithm": "hqc256",
   "id": 189,
   "iterations": 10000,
   "results": {
      "averageCPU": 3.1,
      "averageMemory": 476,
      "requestThroughput": 100,
      "bytesThroughput": 520
   }
}

Implementation Steps

  1. Data Collection: Per run, we need the number of iterations, the total message size (=iterations * message size), and the total time taken for the test run.

  2. Message Size Calculation: Calculate the size of an HTTP request by adding the size of the request line, headers, and the payload (for POST requests). The size of the request line includes the HTTP method, URL, and HTTP version. Headers include fields like Host, User-Agent, Accept, and others. The payload is the data being sent to the server (for POST requests).

Python snippet:

# Example request components (replace with the actual request being sent)
method = 'POST'
url = 'https://localhost:4443/'
headers = {'Host': 'localhost', 'User-Agent': 'qujata-curl', 'Accept': '*/*'}
data = 'x' * 1024  # payload for POST requests

# Calculate the size of the request line (method + URL + HTTP version)
# We add 4 to account for the 2 spaces and the CRLF that are part of the request line
request_line_size = len(method.encode()) + len(url.encode()) + len('HTTP/1.1'.encode()) + 4

# Calculate the size of the headers ("Name: value\r\n" per header)
headers_size = sum(len(f"{name}: {value}\r\n".encode()) for name, value in headers.items())

# Calculate the size of the payload
data_size = len(data.encode())

# Add up the sizes to get the total size of the request in bytes
total_size = request_line_size + headers_size + data_size
  3. Throughput Calculation: Calculate the throughput per second for iterations and bytes (see the sketch after these steps):

    For request count: Throughput in count/second = (Total Count of Requests) / (Total Time in Seconds)

    For bytes: Throughput in bytes/second = (Total Data in Bytes) / (Total Time in Seconds)

    If the total time is less than a second, the throughput has to be scaled up proportionally to represent an estimate for a full second.
    For example, if the total time is 0.5 seconds (500 milliseconds) and there are 500 requests, the formula becomes 500 / 0.5, resulting in 1000 requests per second.

  4. Field Addition: After calculating the throughput values, add them to the respective JSON objects under the results field as requestThroughput and bytesThroughput.

  5. Metrics in DB: When saving the throughput values, make sure to name the metrics as follows: requestThroughputPerSecond and bytesThroughputSecond.

  6. Visualization: After adding the new fields to the JSON, add the metrics to the UI visualization, both the table and the graphs.
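
A minimal sketch of the throughput calculation described in step 3 (names are illustrative):

def calculate_throughput(total_requests: int, total_bytes: int, total_time_seconds: float) -> dict:
    """Return per-second throughput values; dividing by the elapsed time scales
    sub-second runs up proportionally (e.g. 500 requests / 0.5 s = 1000 req/s)."""
    if total_time_seconds <= 0:
        raise ValueError("total_time_seconds must be positive")
    return {
        "requestThroughput": total_requests / total_time_seconds,
        "bytesThroughput": total_bytes / total_time_seconds,
    }

# Example from the description: 500 requests completed in 0.5 seconds
# calculate_throughput(500, 500 * 1024, 0.5)
# -> {"requestThroughput": 1000.0, "bytesThroughput": 1024000.0}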

Acceptance Criteria

  1. The size of an HTTP request is correctly calculated by adding the size of the request line, headers, and the payload (for POST requests).

  2. The values for requestThroughput correctly represent the number of iterations that would be completed per second, given the total count of iterations and the total time for the test run. This should be calculated even if the total time is less than a second, in which case the throughput should be scaled up proportionally.

  3. The values for bytesThroughput correctly represent the volume of data that would be processed per second, given the total volume of data and the total time for the test run. This should be calculated even if the total time is less than a second, in which case the throughput should be scaled up proportionally.

  4. The new fields requestThroughput and bytesThroughput are correctly added to the respective JSON objects under the results field.

  5. These new metrics show up in the UI results table, and the graphs.

Tasks

Examine real-world scaling scenarios

Description

We want to observe different setups of scaling with different concurrency levels.
This will be done by simulating many clients, and observing server pods scaling up.
We would like to examine the impacts of PQC on such common use cases, measuring memory, CPU, throughput and more.
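
A minimal client-side sketch of driving one such scenario with a given concurrency level using a thread pool; the URL and helper names are illustrative, and TLS certificate handling and error handling are omitted:

import concurrent.futures
import time
import urllib.request

def send_request(url: str, payload: bytes) -> float:
    """Send one request and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    req = urllib.request.Request(url, data=payload, method="POST")
    with urllib.request.urlopen(req) as resp:
        resp.read()
    return time.perf_counter() - start

def run_scenario(url: str, num_requests: int, request_size_bytes: int, concurrency: int) -> list[float]:
    """Fire num_requests requests with the given concurrency and return per-request latencies."""
    payload = b"x" * request_size_bytes
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(send_request, url, payload) for _ in range(num_requests)]
        return [f.result() for f in futures]

# Example: the "Light Traffic E-commerce" scenario below (endpoint URL is a placeholder)
# latencies = run_scenario("http://<nginx-host>/", num_requests=100, request_size_bytes=1024, concurrency=20)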

Acceptance Criteria

Real-world scaling scenarios

We need to test at least the following scenarios.

Low traffic applications

  1. Light Traffic E-commerce
    • Number of Requests: 100
    • Request Size: Small (e.g., 1KB)
    • Concurrency: 20
    • Scenario: Simulate an e-commerce application during off-peak hours.

Medium traffic applications

  1. IoT Device Control
    • Number of Requests: 300
    • Request Size: Tiny (e.g., 100 bytes)
    • Algorithms: Quantum-Safe, Hybrid, Classic
    • Concurrency: 30
    • Scenario: Evaluate the impact of algorithms on real-time control and monitoring of IoT devices.

  2. Healthcare Records Access
    • Number of Requests: 500
    • Request Size: Medium (e.g., 1MB)
    • Concurrency: 50
    • Scenario: Simulate a healthcare application for accessing patient records.

  3. Social Media Surge
    • Number of Requests: 1000
    • Request Size: Medium (e.g., 1MB)
    • Concurrency: 100
    • Scenario: Emulate a social media platform during a viral event or trending topic.

High traffic applications

  1. Online Banking Transactions
    • Number of Requests: 2000
    • Request Size: Medium (e.g., 1MB)
    • Concurrency: 200
    • Scenario: Assess the impact of algorithms on the security and speed of financial transactions.

  2. Ride-Sharing Peak Hours
    • Number of Requests: 3000
    • Request Size: Tiny (e.g., 100 bytes)
    • Algorithms: Quantum-Safe, Hybrid, Classic
    • Concurrency: 300
    • Scenario: Simulate a ride-sharing app during rush hours in a busy city.

Very high traffic applications

  1. Online Retail Peak Sale
    • Number of Requests: 5000
    • Request Size: Large (e.g., 10MB)
    • Concurrency: 500
    • Scenario: Simulate an online retail store during a peak shopping season or sale event.

  2. Video Streaming Service
    • Number of Requests: 10,000
    • Request Size: Large (e.g., 10MB)
    • Algorithms: Quantum-Safe, Hybrid, Classic
    • Concurrency: 1000
    • Scenario: Evaluate how well your application handles a surge in video streaming requests during a major live event.

  3. Content Delivery Network (CDN)
    • Number of Requests: 20,000
    • Request Size: Very Large (e.g., 100MB+)
    • Algorithms: Quantum-Safe, Hybrid, Classic
    • Concurrency: 2000
    • Scenario: Measure how algorithms affect the speed and efficiency of delivering very large media files during a global event.

Tasks

  • K8S mode - run qujata-curl as DaemonSet
  • Init app concurrency when app is loaded
  • Analyze api - support working with multiple curl pods
  • New metric for percentage of cpu usage
  • Analyze api - new parameter of concurrency
  • scale up nginx pods
    (More tasks to be added, this does not cover everything)

Test run parameter - Message size

Description

Add a new param to our experiments: message size. Send different sizes of payload in every cURL request to the NGINX server.
Display line charts to examine the effect of message size and PQ algorithms.

Acceptance Criteria

Issues

  • Issue one
  • issue two
  • issue three

Static web page displaying reports

Description

Implement and deploy a React-based portal with JSON visualizations of selected test suites to GitHub Pages.
A static page showing static results exported from our findings.

Figma
Report Selector, No navigation
image

Acceptance Criteria

  1. React app spin up switch: The "View experiments page" should have 2 modes. The first one is for local run / environment run (already implemented), where you have a backend to run your experiments.
    The second mode is for the benchmarking reports, displaying our experiment runs from a folder in our main git branch.
    You need to implement the switch of how you spin up the React application (There is a PR on it).
    There is an environment variable that you should rely on, which will tell you in which mode you should spin up the app.
    In the static mode, you should spin up the app with a different main page: View experiments page with its static mode.

  2. View experiments page enhancement - Static mode: the page's report state must include the following:

  • Report selector: Select a report by date according to the JSON files that are available.
  • Navigation: There should be no navigation buttons in the toolbar.

Design

Our static portal will be deployed from our main code build, but with a build version dedicated to "Pages".
This dedicated build will have a few tweaks:

  1. Different main page: The "Reports Page" will become the main page. It is very similar to a single experiment's page.
  2. Different data source: The idea is to have a folder in the main branch with our latest experiments and use it as a data source.

CI/CD

graph TD
    Dev("Developer") -->|Push changes| GH("GitHub Repo")
    GH -->|Trigger workflow| GA("GitHub Actions")

        GA --> BE("Environment Configuration")

    subgraph DockerBuild["Docker Build"]
        BE -->|BUILD_ENV=gh-pages| GHP("GitHub Pages Build Settings")
        BE -->|BUILD_ENV!=gh-pages| STD("Standard Build Settings")
    end

    DockerBuild --> IMG("Docker Image")

    IMG -->|Extract static content| SC("Static Content")
    SC -->|Deploy to GitHub Pages| Pages("GitHub Pages")

    classDef default fill:#ddf,stroke:#33a,stroke-width:2px, border-radius:10px;
    classDef build fill:#bbf,stroke:#33a,stroke-width:2px, border-radius:10px;
    classDef settings fill:#ccf,stroke:#33a,stroke-width:2px, border-radius:10px;
    class DockerBuild,IMG,SC,Pages build
    class BE,GHP,STD settings


Implementation steps

React Application Setup

  1. Environment Variables: Define necessary environment variables in .env files for different environments:

    • .env (local development):

      REACT_APP_MAIN_PAGE=MainPageLocal.js
      
    • .env.gh-pages (GitHub Pages):

      REACT_APP_MAIN_PAGE=MainPageGitHubPages.js
      
  2. Fetching from our GitHub repo reports folder

Fetch All File Names in a GitHub Repo Folder

const fetchFileNames = async () => {
  const apiUrl = `https://api.github.com/repos/${owner}/${repo}/contents/${path}?ref=${branch}`;
  try {
    const response = await fetch(apiUrl);
    const data = await response.json();
    const fileNames = data.filter(item => item.type === 'file').map(file => file.name);
    console.log('File names:', fileNames);
  } catch (error) {
    console.error('Error fetching file names:', error);
  }
};

Fetch a Single File by Name from a GitHub Repo

const fetchFileContent = async () => {
  const fileUrl = `https://raw.githubusercontent.com/${owner}/${repo}/${branch}/${path}/${fileName}`;
  try {
    const response = await fetch(fileUrl);
    const content = await response.text();  // Use .json() if you're fetching a JSON file
    setFileContent(content);
  } catch (error) {
    console.error('Error fetching file content:', error);
  }
};

For both snippets, you need to replace owner, repo, path, and fileName with actual values. The branch parameter defaults to 'main'.

  3. Dynamic Main Page Import: Use dynamic imports based on the environment variable to load the main page component:

    import React, { lazy, Suspense } from 'react';
    
    const MainPage = lazy(() => import(`./pages/${process.env.REACT_APP_MAIN_PAGE}`));
    
    function App() {
      return (
        <Suspense fallback={<div>Loading...</div>}>
          <MainPage />
        </Suspense>
      );
    }
    
    export default App;

Dockerfile Adjustments

Ensure the Dockerfile copies the correct environment file based on the build context:

# Example Dockerfile snippet

COPY . .
RUN if [ "$BUILD_ENV" = "gh-pages" ]; then \
        cp .env.gh-pages .env; \
    fi

# Continue with Docker build steps...

GitHub Actions Workflow

Update the workflow to build the Docker image with the BUILD_ENV argument and include steps for deploying to GitHub Pages:

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v2

    - name: Build Docker image for GitHub Pages
      run: docker build ./portal --file ./portal/Dockerfile --tag my-website-image --build-arg BUILD_ENV=gh-pages

    # Steps for copying static content from the Docker image and deploying to GitHub Pages

This solution integrates the dynamic main page switch with the data fetching setup. The main page component is dynamically imported based on an environment variable, which allows for different main pages in different environments (e.g., local development vs. GitHub Pages). The data fetching function adapts to the data source (API or GitHub JSON files) and processes the response accordingly.

Issues

  • Automate build & deployment to Pages
  • Implement conditional build using env vars
  • Implement experiments conditional data source
  • Implement conditional main page
    (More tasks must be added, these tasks do not cover the entire feature)

message sizes list options api

Add a new API to list the message size options in bytes: 0 bytes, 1 byte, 2 bytes, 100 bytes, 1KB, 100KB, 200KB, 1MB, 2MB, 10MB.
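
A minimal sketch of such an endpoint, assuming a Flask-style API; the route name and response shape are illustrative, not the actual qujata-api contract:

from flask import Flask, jsonify

app = Flask(__name__)

# Predefined message size options, expressed in bytes
MESSAGE_SIZE_OPTIONS_BYTES = [
    0, 1, 2, 100,
    1 * 1024,            # 1 KB
    100 * 1024,          # 100 KB
    200 * 1024,          # 200 KB
    1 * 1024 * 1024,     # 1 MB
    2 * 1024 * 1024,     # 2 MB
    10 * 1024 * 1024,    # 10 MB
]

@app.route("/api/message_sizes", methods=["GET"])  # hypothetical route name
def list_message_sizes():
    return jsonify({"message_size_options_bytes": MESSAGE_SIZE_OPTIONS_BYTES})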

Automatic support of OQS algorithms

The list of algorithms that we support should always be aligned with the algorithms specified in:
https://github.com/open-quantum-safe/oqs-provider#algorithms

Here is the list of algorithm families and the algorithms we currently support:

Algorithm | Type | Quantum Safe/Hybrid | Round 1 | Round 2 | Round 3 | Round 4 | Supported algorithms
CRYSTALS-Dilithium | SIG | Quantum Safe | Yes | Yes | Yes | TBD | -
CRYSTALS-Kyber | KEM | Quantum Safe | Yes | Yes | Yes | TBD | kyber1024, kyber512, kyber768, p256_kyber512, p384_kyber768, x25519_kyber768
FALCON | SIG | Quantum Safe | Yes | Yes | Yes | TBD | -
SPHINCS+ | SIG | Quantum Safe | Yes | Yes | Yes | TBD | -
BIKE | KEM | Quantum Safe | No | No | TBD | TBD | bikel1, bikel3, bikel5
Classic McEliece | KEM | Quantum Safe | No | No | TBD | TBD | -
HQC | KEM | Quantum Safe | No | No | TBD | TBD | hqc128, hqc192, hqc256
SIKE | KEM | Quantum Safe | No | No | TBD | TBD | -
NTRU | KEM | Quantum Safe | No | No | - | - | -
NTRU Prime | KEM | Quantum Safe | No | No | - | - | -
Picnic | SIG | Quantum Safe | No | No | - | - | -
Rainbow | SIG | Quantum Safe | No | No | - | - | -
Saber | KEM | Quantum Safe | No | No | - | - | -
FrodoKEM | KEM | Quantum Safe | No | No | - | - | frodo1344aes, frodo1344shake, frodo640aes, frodo640shake, frodo976aes, frodo976shake
LAC | KEM | Quantum Safe | No | - | - | - | -
LEDAcrypt | KEM | Quantum Safe | No | - | - | - | -
LIMA | KEM | Quantum Safe | No | - | - | - | -
LUOV | SIG | Quantum Safe | No | - | - | - | -
MQDSS | SIG | Quantum Safe | No | - | - | - | -
NewHope | KEM | Quantum Safe | No | - | - | - | -
qTESLA | SIG | Quantum Safe | No | - | - | - | -
ROLLO | KEM | Quantum Safe | No | - | - | - | -
RQC | KEM | Quantum Safe | No | - | - | - | -
ThreeBears | KEM | Quantum Safe | No | - | - | - | -
XMSS | SIG | Quantum Safe | No | - | - | - | -
xXE | KEM | Quantum Safe | No | - | - | - | -
KEMTLS | KEM | Hybrid | - | No | - | - | -
OQS | KEM | Hybrid | - | No | - | - | -
pqNTRUSign | SIG | Hybrid | - | No | - | - | -
P-256 | KEM | Classic | - | - | - | - | prime256v1
P-384 | KEM | Classic | - | - | - | - | secp384r1
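
One possible way to keep this list aligned automatically is to query the locally installed oqs-provider. A minimal sketch, assuming OpenSSL 3.x with the oqs-provider loaded (the exact output format of openssl list may vary by version):

import subprocess

def list_oqs_kem_algorithms() -> list[str]:
    """Query the locally installed oqs-provider for its enabled KEM algorithms."""
    output = subprocess.run(
        ["openssl", "list", "-kem-algorithms", "-provider", "oqsprovider"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each enabled KEM appears on its own line, roughly as "kyber768 @ oqsprovider";
    # keep only the algorithm name portion.
    return [line.split("@")[0].strip() for line in output.splitlines() if "oqsprovider" in line]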

Experiments scheduler

Description

Implement a scheduler that will set up test runs. Its purpose is to run multiple predefined experiments and let them run in the background.
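
A minimal sketch of how the scheduler could run predefined experiments on a background thread; the experiment definitions and the submit_experiment callback are hypothetical placeholders, not the existing API:

import threading
import time

# Hypothetical predefined experiments; submit_experiment would call the existing analyze API
PREDEFINED_EXPERIMENTS = [
    {"algorithms": ["kyber768", "prime256v1"], "iterations": 1000, "message_size_bytes": 1024},
    {"algorithms": ["frodo640aes"], "iterations": 500, "message_size_bytes": 100 * 1024},
]

def run_scheduled_experiments(submit_experiment, delay_seconds: float = 0) -> threading.Thread:
    """Run the predefined experiments one after another on a background thread."""
    def worker():
        for experiment in PREDEFINED_EXPERIMENTS:
            submit_experiment(experiment)
            time.sleep(delay_seconds)
    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return thread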

Acceptance Criteria

Issues

  • Issue one
  • issue two
  • issue three

Metrics collection - cURL TLS Handshake, Data Transfer Time

Description

Collect new metrics - TLS Handshake, Data Transfer Time.

Read this blog post for help:
https://blog.josephscott.org/2011/10/14/timing-details-with-curl/

  1. The handshake time provides insight into the initial connection establishment duration. Post-quantum cryptography (PQC) might increase this time due to the potentially larger key sizes and computational intensity of the algorithms.
  2. Data Transfer Time
    This metric indicates the time taken to send and receive data after the handshake. While PQC might not have a significant impact on this, it's still essential to measure for a complete performance picture. This can be obtained similarly to the handshake time, but for the actual data transfer phase.
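
A minimal sketch of collecting these timings with curl's --write-out variables (time_appconnect marks the end of the TLS handshake); the metric names in the returned dict are illustrative:

import subprocess

# curl --write-out timing variables (seconds since the start of the transfer):
#   time_connect        TCP connection established
#   time_appconnect     TLS handshake completed
#   time_starttransfer  first byte received
#   time_total          whole transfer finished
WRITE_OUT = "%{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}"

def measure_timings(url: str) -> dict:
    """Run one curl request and derive handshake and data-transfer times."""
    out = subprocess.run(
        ["curl", "-k", "-s", "-o", "/dev/null", "-w", WRITE_OUT, url],
        capture_output=True, text=True, check=True,
    ).stdout
    connect, appconnect, starttransfer, total = map(float, out.split())
    return {
        "tlsHandshakeTime": appconnect - connect,   # TLS handshake duration
        "dataTransferTime": total - appconnect,     # time spent after the handshake
    }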

Acceptance Criteria

For all supported Qujata algorithms, support the following

  1. Collect and record all relevant data on everything related to connection setup
  2. Collect and record all relevant data on data transfer, after connection was made, although this should, in theory, be the same as today

Tasks

Be the first to suggest how to break this feature into individual tasks

  • Task 1
  • Task 2

K8S mode - Run qujata-curl as DaemonSet

Currently, qujata-curl runs as a single pod on the K8S cluster.
In order to support better concurrency, qujata-curl should run as a DaemonSet on all nodes in the K8S cluster.

Algorithms comparison visualization

Description

Create a table and graphs out of the JSON in the researcher Qujata portal.
Implement it according to the metrics and params that are already available.

Summary table: Algorithm | Iterations | Message size | CPU | Memory | Error rate | Bytes throughput | Messages throughput | TLS handshake time

Bar chart graphs:

X = Algorithm + Iteration + Message size, Y = Metric.

Total graphs = number of metrics (count of testRuns[x].results)
Each combination of iterations + message size + algorithm will be represented as another “Bar” in the graph.
Example:

Algorithm1.cpu = 25.5  (1000, 20) 
Algorithm1.cpu = 40  (1000, 1024) 
Algorithm2.cpu = 30.5  (1000, 20)
Algorithm2.cpu = 50  (1000, 1024) 

Multi series graphs:
Line per algorithm, Y = Metric (e.g. Avg CPU), X = message size
Line per algorithm, Y = Metric (e.g. Avg CPU), X = number of iterations
Total graph combinations = number of metrics * 2 (1 for message size + 1 for iterations)

Filters:
Algorithm(s) by name
Optional: Algorithm(s) by family (e.g. all BIKELs, all FRODOs.)
Optional: Algorithm(s) by NIST round (e.g. R1, R3, R5.)
Number of iterations
Message size
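
A minimal sketch, assuming pandas and the testRuns JSON structure from the export issue above, of how test-run results could be reshaped for the bar and multi-series charts described here (function name and column choices are illustrative):

import pandas as pd

def test_runs_to_dataframe(test_runs: list[dict]) -> pd.DataFrame:
    """One row per test run, with each result metric promoted to its own column."""
    return pd.DataFrame([
        {"algorithm": r["algorithm"],
         "iterations": r["iterations"],
         "messageSizeBytes": r.get("messageSizeBytes"),
         **r.get("results", {})}
        for r in test_runs
    ])

# Bar chart data: one bar per (algorithm, iterations, message size) combination
# df = test_runs_to_dataframe(test_runs)
# bars = df.set_index(["algorithm", "iterations", "messageSizeBytes"])["averageCPU"]

# Multi-series line chart: one line per algorithm, X = message size, Y = averageCPU (fixed iteration count)
# lines = df[df["iterations"] == 1000].pivot(index="messageSizeBytes", columns="algorithm", values="averageCPU")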

Acceptance Criteria

  1. Summary table with line for each test run
  2. 4 graphs, in two rows, per the design in Figma (external contributors, please reach out if you'd like to take this task)
    a. Each graph should allow the users to define the X and Y axis, as well as chart type, currently, two options only, line or bar charts
    b. See description above for more details
    c. Default should be
    1. CPU (combined for Server and Client) vs. number of iterations
    2. CPU (combined for Server and Client) vs. message size
    3. Memory (combined for Server and Client) vs. number of iterations
    4. Memory (combined for Server and Client) vs. message size
  3. Create filters that should apply for table as well as all graphs
    a. Filters button should be always visible
    b. Filter the following: Algorithm(s) by name, Algorithm(s) by family (e.g. all BIKELs, all FRODOs. optional for now.), Algorithm(s) by NIST round (e.g. R1, R3, R5. optional for now), Operating system, Number of iterations, Message size

Tasks

  • Summary table: Algorithm | Iterations | Message size | CPU | Memory | Error rate | Bytes throughput | Messages throughput | TLS handshake time
  • Bar chart graphs: X = Algorithm, Y = Metric
  • Multi series line chart graphs: Line per algorithm, Y = Metric (e.g. CPU), X = Message size
  • Filters: Should apply for table and the graphs alike. see details in acceptance criteria
  • Data visualization - backend support
  • [Homepage Tab] Experiment (Test Suite) Page
  • [All-Experiments Tab] All Experiments Page
  • [Homepage Tab] Latest Experiment - history view

Support additional platforms

Description

Support multiple platforms on which our benchmarking orchestration can run:

  • Windows
  • Additional Linux CPU architectures

Analyze the differences in results between the platforms

Acceptance Criteria

Issues

  • #91
  • issue two
  • issue three

Test run parameters - Message size

Description

Currently, the entire application works without the message size parameter.

As a backend developer, we need to make sure the application includes the message size in its analyze process.

As a frontend developer, it needs to be presented in the following sections:

  • Home page:
    • New test run parameter, including custom input.
  • "Experiment" page:
    • New column called "Message Size (KB)" inside the experiment table.
    • Add the message size to the "selected columns" popup section.
    • Add the message size to the charts tooltip and charts drop-down option presented.
  • "All Experiments" page:
    • New column called "Message Size (KB)" inside the all-experiments table.

Predefined list of options in the UI
0 bytes, 1 byte, 2 bytes, 100 bytes, 1KB, 100KB, 200KB, 1MB, 2MB, 10MB
These values will be provided in bytes from the backend.
The user should be able to add more options as needed. (In bytes)
The dropdown should take the bytes that are given as input and show them in the correct unit.
The number itself in the dropdown should not exceed 3 digits. (500 KB is ok, 1024KB is not OK. Should be 1 MB).
Below is a JS library + usage snippet that can help with that.

const filesize = require('filesize');

function convertBytesToHumanReadable(bytesValue) {
    /**
     * Convert bytes to human-readable units with a maximum of 3 digits.
     */
    return filesize(bytesValue, {round: 3});
}

// Example usage:
const bytesValue1 = 500 * 1024; // 500 KB
const bytesValue2 = 1024 * 1024; // 1024 KB
console.log(convertBytesToHumanReadable(bytesValue1)); // Output: 500.000 KB
console.log(convertBytesToHumanReadable(bytesValue2)); // Output: 1.000 MB

Acceptance Criteria

We should consider the "message size" parameter throughout the entire application.

Tasks

Metrics collection - Error rate

Description

Implement "error rate" metric. Each experiment must log the number of runs that failed.

Acceptance Criteria

  1. Log the number of runs that failed for each test run (every experiment, or test suite, contains one or more test runs; each test run includes a combination of algorithm, number of iterations and message size)
  2. Calculate the error rate as the number of failed runs out of the total number of iterations for a specific test run
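
A minimal sketch of the calculation (function name is illustrative):

def error_rate(failed_runs: int, iterations: int) -> float:
    """Fraction of failed requests out of all iterations in a single test run."""
    if iterations <= 0:
        raise ValueError("iterations must be positive")
    return failed_runs / iterations

# Example: 5 failures out of 1000 iterations -> 0.005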

Tasks

Be the first to suggest how to break this feature into individual tasks

  • Task 1
  • Task 2

UI Enhancements

Description

  1. Add measurement units to the results table as shown on Figma: https://www.figma.com/file/6kPffSBoX0ymy7XnM0iIog/Qujata?type=design&node-id=1787-3306&mode=design&t=dreKjW5Yeis74zYM-0

  2. Dynamic graph update:
    When "Message Size" is selected as the X Axis, the user must choose a fixed "Number of Iterations".
    When "Number of Iterations" is selected as the X Axis, the user must choose a fixed "Message Size".

Acceptance Criteria

  • Measurement units are correctly set in the results table.
  • The dynamic graphs must show an additional dropdown to choose a fixed message size/iterations, to allow the user to gain valuable insights from the graphs.

[Curl] Add message size parameter

  • Add message size parameter to the request
  • Generate body according to the message size
  • Send the request to nginx with the generated data
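
A minimal Python sketch of these steps (illustrative only, not the actual qujata-curl implementation); the endpoint URL is a placeholder:

import subprocess

def send_with_message_size(url: str, message_size_bytes: int) -> None:
    """Generate a payload of the requested size and POST it to the nginx endpoint via curl."""
    payload = "x" * message_size_bytes  # simple filler body of exactly message_size_bytes bytes
    subprocess.run(
        ["curl", "-k", "-s", "-o", "/dev/null", "-X", "POST", "--data-binary", "@-", url],
        input=payload, text=True, check=True,
    )

# Example (hypothetical endpoint): send_with_message_size("https://localhost:4443/", 100 * 1024)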

Reflect run status to user

Description

Poll the backend to check the current run's status and better reflect its progress to the user.

Acceptance Criteria

Issues

  • Issue one
  • issue two
  • issue three

[Portal] Add message_size parameter

As a frontend developer, it needs to be presented in the following sections:

  • Home page:
    • New test run parameter, including custom input.
  • "Experiment" page:
    • New column called "Message Size (KB)" inside the experiment table.
    • Add the message size to the "selected columns" popup section.
    • Add the message size to the charts tooltip and charts drop-down option presented.
  • "All Experiments" page:
    • New column called "Message Size (KB)" inside the all-experiments table.
    • "Duplicate" into homepage through all-experiments.

New metric for percentage of cpu usage

Currently, the CPU metric represents CPU cores usage.
This metric should be renamed to CPU_CORES_USAGE,
and a new metric, CPU_PERCENTAGE_USAGE, should be created.
