
dashboard's Introduction


Oakestra is an orchestration platform designed for Edge Computing. Popular orchestration platforms such as Kubernetes or K3s struggle to maintain workloads across heterogeneous and constrained devices. Oakestra is built from the ground up to support computation at the edge in a flexible way.

๐ŸŒ Read more about the project at: oakestra.io

📚 Check out the project wiki at: oakestra.io/docs


🌳 Get Started

Before deploying your first application, we must create a fully functional Oakestra Root 👑, to which we attach the clusters 🪵, and to each cluster we attach at least one worker node 🍃.

In this get-started guide, we place everything on the same machine. More complex setups can be composed following our wiki at oakestra.io/docs/getstarted/get-started-cluster.

Requirements

  • Linux machine with iptables
  • Docker + Docker Compose v2

Your first cluster 🪵

Let's start our Root, the dashboard, and a cluster orchestrator on your machine. We call this setup 1-DOC, which stands for 1 Device One Cluster, meaning that all the components are deployed locally.

curl -sfL oakestra.io/getstarted.sh | sh - 

You can turn off the cluster using docker compose -f ~/oakestra/1-DOC.yaml down

Your first worker node 🍃

Download and install the Node Engine and the Network Manager:

curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/InstallOakestraWorker.sh | sh -  

Configure the Network Manager by editing /etc/netmanager/netcfg.json as follows:

{
  "NodePublicAddress": "<IP ADDRESS OF THIS DEVICE>",
  "NodePublicPort": "<PORT REACHABLE FROM OUTSIDE, use 50103 as default>",
  "ClusterUrl": "<IP Address of cluster orchestrator or 0.0.0.0 if deployed on the same machine>",
  "ClusterMqttPort": "10003"
}
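
For the 1-DOC quick start, where the cluster orchestrator runs on the same machine, a minimal configuration could look like this (a sketch; 192.168.1.10 is a placeholder for your device's address):

{
  "NodePublicAddress": "192.168.1.10",
  "NodePublicPort": "50103",
  "ClusterUrl": "0.0.0.0",
  "ClusterMqttPort": "10003"
}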

Start the NetManager on port 6000:

sudo NetManager -p 6000

In a different shell, start the NodeEngine with the -n 6000 parameter to connect it to the NetManager:

sudo NodeEngine -n 6000 -a <Cluster Orchestrator IP Address>

If you see the NodeEngine reporting metrics to the Cluster...

๐Ÿ† Success!

✨🆕✨ If the worker node machine has KVM installed and supports nested virtualization, you can add the flag -u=true to the NodeEngine startup command to enable Oakestra Unikernel deployment support for this machine.
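
For example, on a KVM-capable worker (a sketch reusing the startup command above):

sudo NodeEngine -n 6000 -a <Cluster Orchestrator IP Address> -u=true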

Your first application 💻

Let's use the dashboard to deploy your first application.

Navigate to http://SYSTEM_MANAGER_URL and login with the default credentials:

  • Username: Admin
  • Password: Admin

Deactivate the Organization flag for now (unlike what is depicted in the reference image).

Add a new application, and specify the app name, namespace, and description. N.b.: Max 30 alphanumeric characters. No symbols.

Then, create a new service using the button.

Fill the form using the following values (N.b.: max 30 alphanumeric characters, no symbols):

Service name: nginx
Namespace: test
Virtualization: Container
Memory: 100MB
Vcpus: 1
Port: 80
Code: docker.io/library/nginx:latest

Finally, deploy the application using the deploy button.

Check the application status, IP address, and logs.


The Node IP field represents the address where you can reach your service. Let's now use our browser to navigate to the IP 131.159.24.51 used by this application.
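
You can also verify this from a terminal (assuming your service instance was assigned the address above; nginx listens on port 80):

curl http://131.159.24.51

The default Nginx welcome page should be returned.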


🎯 Troubleshoot

  • 1-DOC startup sends a warning regarding missing cluster name or location.

    After exporting the env variables at step 1, if you're using sudo with docker-compose, remember the -E parameter.

  • NetManager bad network received

    Something is off at the root level. Most likely, the cluster network component is not receiving a subnetwork from the root. Make sure all the root components are running.

  • NetManager timeout

    The cluster network components are not reachable. Either they are not running, or the config file /etc/netmanager/netcfg.json must be updated.

  • Deployment Failed: NoResourcesAvailable/NoTargetCluster

    There is no worker node with the specified capacity or no worker node deployed at all. Are you sure the worker node startup was successful?

  • Wrong Node IP displayed

The node IP is currently reported from the cluster orchestrator's perspective. If it shows a different IP than expected, it's probably the IP of the interface used to reach the cluster orchestrator.

  • Other stuff? Contact us on Discord!

๐Ÿ› ๏ธ How to create a multi-cluster setup

Root Orchestrator

Initialize a standalone root orchestrator.

On a Linux machine, first install Docker and Docker Compose v2.

Then configure the address used by the dashboard to reach your APIs by running:

export SYSTEM_MANAGER_URL=<Address of current machine>

To run the Root orchestrator from the pre-compiled images:

  • (optional) Set a repository branch, e.g., export OAKESTRA_BRANCH=develop; the default branch is main.
  • (optional) Set a comma-separated list of custom override files for Docker Compose, e.g., export OVERRIDE_FILES=override-alpha-versions.yaml.
  • Download, set up, and start the root orchestrator simply by running:
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraRoot.sh | sh - 
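
Putting the exports together, a typical invocation might look like this (a sketch; 192.168.1.10 stands in for your machine's address):

export SYSTEM_MANAGER_URL=192.168.1.10
export OAKESTRA_BRANCH=develop # optional
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraRoot.sh | sh -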

If you wish to build the Root Orchestrator yourself from source, clone the repo and run:

cd root_orchestrator/
docker-compose up --build 

The following ports are exposed:

  • Port 80 - Dashboard
  • Port 10000 - System Manager (It also needs to be accessible from the Cluster Orchestrator)

Cluster Orchestrator

For each cluster, we need at least one machine running the cluster orchestrator.

  • Log into the target machine/vm you intend to use
  • Install Docker and Docker compose v2.
  • Export the required parameters:
## Choose a unique name for your cluster
export CLUSTER_NAME=My_Awesome_Cluster

## Optional: Give a name or geo coordinates to the current location. Default location set to coordinates of your IP
#export CLUSTER_LOCATION=My_Awesome_Apartment

## IP address where this root component can be reached to access the APIs
export SYSTEM_MANAGER_URL=<IP address>
# Note: Use a non-loopback interface IP (e.g. any of your real interfaces that have internet access).
# "0.0.0.0" leads to server issues

You can run the cluster orchestrator using the pre-compiled images:

  • (optional) Set a repository branch, e.g., export OAKESTRA_BRANCH=develop; the default branch is main.
  • (optional) Set a comma-separated list of custom override files for Docker Compose, e.g., export OVERRIDE_FILES=override-alpha-versions.yaml.
  • (optional) Set a custom cluster location, e.g., export CLUSTER_LOCATION=<latitude>,<longitude>,<radius>; by default the location is inferred from the machine's public IP address.
  • Download and start the cluster orchestrator components:
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraCluster.sh | sh - 
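
A complete sequence for a cluster machine could therefore look like this (a sketch with placeholder values):

export CLUSTER_NAME=My_Awesome_Cluster
export CLUSTER_LOCATION=48.2628,11.6690,2000 # optional
export SYSTEM_MANAGER_URL=<root orchestrator IP>
curl -sfL https://raw.githubusercontent.com/oakestra/oakestra/develop/scripts/StartOakestraCluster.sh | sh -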

If you wish to build the cluster orchestrator yourself, simply clone the repo and run:

export CLUSTER_LOCATION=My_Awesome_Apartment # When building from source, this is no longer optional
cd cluster_orchestrator/
docker-compose up --build 

The following ports are exposed:

  • 10100 Cluster Manager (needs to be accessible by the Node Engine)

Worker nodes

For each worker node you can either use the pre-compiled binaries (check 🌳 Get Started) as usual or compile them on your own.

Build your node engine

Requirements

  • Linux OS with the following packages installed (Ubuntu and many other distributions support them natively)
    • iptables
    • ip utils
  • Port 50103 available

Compile and install the binary with:

cd go_node_engine/build
./build.sh
./install.sh $(dpkg --print-architecture)

Then configure the NetManager and perform the startup as usual.

N.b. each worker node can now be configured to work with a different cluster.
N.b. you can disable the Overlay Network (and therefore avoid using the NetManager) by passing the -n -1 flag at NodeEngine startup, as shown below.
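
For example, to start a worker node without the overlay network (a sketch):

sudo NodeEngine -n -1 -a <Cluster Orchestrator IP Address>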

🎼 Deployment descriptor

Together with the application, it's possible to perform a deployment by passing a deployment descriptor (or SLA) in .json format to the APIs or the frontend.

Since version 0.4, Oakestra (previously EdgeIO) uses the following deployment descriptor format.

E.g.: deploy_curl_application.yaml

{
  "sla_version" : "v2.0",
  "customerID" : "Admin",
  "applications" : [
    {
      "applicationID" : "",
      "application_name" : "clientsrvr",
      "application_namespace" : "test",
      "application_desc" : "Simple demo with curl client and Nginx server",
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "curl",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": ["sh", "-c", "curl 10.30.55.55 ; sleep 5"],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/curlimages/curl:7.82.0",
          "state": "",
          "port": "",
          "added_files": [],
          "constraints":[]
        },
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": [],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/library/nginx:latest",
          "state": "",
          "port": "80:80/tcp",
          "addresses": {
            "rr_ip": "10.30.55.55"
          },
          "added_files": []
        }
      ]
    }
  ]
}

This deployment descriptor example describes one application named clientsrvr with the test namespace and two microservices:

  • nginx server with test namespace, namely clientsrvr.test.nginx.test
  • curl client with test namespace, namely clientsrvr.test.curl.test

This is a detailed description of the deployment descriptor fields currently implemented:

  • sla_version: the current version is v2.0
  • customerID: id of the user, default is Admin
    • application list: in a single deployment descriptor it is possible to define multiple applications, each containing:

      • Fully qualified app name: a fully qualified name in Oakestra is composed of
        • application_name: unique name representing the application (max 30 alphanumeric characters)
        • application_namespace: namespace of the app, used to reference different deployments of the same application. Example namespace names are default, production, or test (max 30 alphanumeric characters)
        • applicationID: leave it empty for new deployments; it is needed only to edit an existing deployment
      • application_desc: short description of the application
      • microservice list, a list of the microservices composing the application. For each microservice the user can specify:
        • microserviceID: leave it empty for new deployments; it is needed only to edit an existing deployment
        • Fully qualified service name:
          • microservice_name: name of the service (max 30 alphanumeric characters)
          • microservice_namespace: namespace of the service, used to reference different deployments of the same service. Example namespace names are default, production, or test (max 30 alphanumeric characters)
        • virtualization: currently the supported virtualizations are container or (✨🆕✨) unikernel
        • cmd: list of commands to be executed inside the container at startup, or the unikernel parameters
        • environment: list of the environment variables to be set, e.g., ['VAR=fOO']
        • vcpus, vgpus, memory: minimum CPU/GPU vcores and memory amount needed to run the container
        • vtpus: currently not implemented
        • code: public link to an OCI container image (e.g., docker.io/library/nginx:latest) or (✨🆕✨) link to a unikernel image in .tar.gz format (e.g., http://<hosting-url-and-port>/nginx_x86.tar.gz)
        • storage: minimum storage size required (currently the scheduler does not take this value into account)
        • bandwidth_in/out: minimum required bandwidth on the worker node (currently the scheduler does not take this value into account)
        • port: port mapping for the container in the syntax hostport_1:containerport_1[/protocol];hostport_2:containerport_2[/protocol] (default protocol is tcp)
        • addresses: lets you specify a custom IP address used to balance the traffic across all the service instances.
          • rr_ip: [optional field] This field allows you to set up a custom Round Robin network address referencing all the instances belonging to this service. The address is permanently bound to the service. It MUST be in the form 10.30.x.y and must not collide with any other Instance Address or Service IP in the system; otherwise an error is returned. If you don't set this field, a new address is generated by the system.
        • ✨🆕✨ one-shot: using the keyword "one_shot": true in the SLA, it is possible to deploy a one-shot service, i.e., a service that, when it terminates with exit status 0, is marked as completed and not re-deployed.
        • constraints: array of constraints regarding the service.
          • type: constraint type
            • direct: Send a deployment to a specific cluster and a specific list of eligible nodes. You can specify "node":"node1;node2;...;noden", a list of node hostnames; these are the only eligible worker nodes. "cluster":"cluster_name" is the name of the cluster where this service must be scheduled. E.g.:
      "constraints":[
                  {
                    "type":"direct",
                    "node":"xavier1",
                    "cluster":"gpu"
                  }
                ]
      

Dashboard SLA descriptor

From the dashboard you can create the application graphically and set the services via SLA. In that case you need to submit a different SLA, containing only the microservice list, e.g.:

{
      "microservices" : [
        {
          "microserviceID": "",
          "microservice_name": "nginx",
          "microservice_namespace": "test",
          "virtualization": "container",
          "cmd": [],
          "memory": 100,
          "vcpus": 1,
          "vgpus": 0,
          "vtpus": 0,
          "bandwidth_in": 0,
          "bandwidth_out": 0,
          "storage": 0,
          "code": "docker.io/library/nginx:latest",
          "state": "",
          "port": "",
          "addresses": {
            "rr_ip": "10.30.55.55"
          },
          "added_files": [],
          "constraints": []
        }
      ]
}

🩻 Use the APIs to deploy a new application and check cluster status

Login

After running a cluster you can use the debug OpenAPI page to interact with the APIs and use the infrastructure.

connect to <root_orch_ip>:10000/api/docs

Authenticate using the following procedure:

  1. Locate the login method and use the Try it out button
  2. Use the default Admin credentials to log in
  3. Copy the resulting login token
  4. Go to the top of the page and authenticate with this token
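
If you prefer the terminal, the same login can be performed with curl (a sketch; the route and payload shown here are assumptions, so verify them on the OpenAPI page):

curl -X POST http://<root_orch_ip>:10000/api/auth/login -H 'Content-Type: application/json' -d '{"username":"Admin","password":"Admin"}'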

Register an application and the services

After you authenticate with the login function, you can try deploying your first application.

  1. Upload the deployment descriptor to the system. You can try using the deployment descriptor above.

The response contains the Application id and the id for all the application's services. Now the application and the services are registered to the platform. It's time to deploy the service instances!

You can always remove or create a new service for the application using the /api/services endpoints.

Deploy an instance of a registered service

  1. Trigger the deployment of a service instance using POST /api/service/{serviceid}/instance

Each call to this endpoint generates a new instance of the service.
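
For example (a sketch; the Bearer authorization scheme is an assumption, check the OpenAPI page for the exact header format):

curl -X POST http://<root_orch_ip>:10000/api/service/<serviceid>/instance -H 'Authorization: Bearer <token>'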

Monitor the service status

  1. With GET /api/applications/<userid> (or simply /api/applications/ if you're admin) you can check the list of deployed applications.
  2. With GET /api/services/<appid> you can check the services attached to an application.
  3. With GET /api/service/<serviceid> you can check the status of all the instances of that service.

Undeploy

  • Use DELETE /api/service/<serviceid> to delete all the instances of a service
  • Use DELETE /api/service/<serviceid>/instance/<instance number> to delete a specific instance of a service
  • Use DELETE /api/application/<appid> to delete an application altogether, with all its services and instances

Cluster Status

  • Use GET /api/clusters/ to get all the registered clusters.
  • Use GET /api/clusters/active to get all the clusters currently active and their resources.
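
For example (same assumption about the authorization header as above):

curl http://<root_orch_ip>:10000/api/clusters/active -H 'Authorization: Bearer <token>'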

Unikernel

It is also possible to use Unikernels by changing the virtualization field of the microservice:

{
	"sla_version": "v2.0",
	"customerID": "Admin",
	"applications": [{
		"applicationID": "",
		"application_name": "nginx",
		"application_namespace": "test",
		"application_desc": "Simple demo of an Nginx server Unikernel",
		"microservices": [{
			"microserviceID": "",
			"microservice_name": "nginx",
			"microservice_namespace": "test",
			"virtualization": "unikernel",
			"cmd": [],
			"memory": 100,
			"vcpus": 1,
			"vgpus": 0,
			"vtpus": 0,
			"bandwidth_in": 0,
			"bandwidth_out": 0,
			"storage": 0,
			"code": "https://github.com/Sabanic-P/app-nginx/releases/download/v1.0/kernel.tar.gz",
			"arch": ["amd64"],
			"state": "",
			"port": "80:80",
			"addresses": {
				"rr_ip": "10.30.30.26"
			},
			"added_files": []
		}]
	}]
}

Differences from Container Deployment:

  • virtualization: set to unikernel
  • code: Specifies the remote Unikernel image, accessible via http(s). There can be multiple Unikernels in the same string, separated by ",", as in the sketch below.
  • arch: Specifies the architectures of the Unikernels given in code. The order of architectures must match the order of the Unikernels given via the code field.
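
For instance, to ship the same unikernel for two architectures (a sketch with hypothetical URLs):

"code": "https://example.com/nginx_amd64.tar.gz,https://example.com/nginx_arm64.tar.gz",
"arch": ["amd64", "arm64"]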

๐Ÿ•ธ๏ธ Networking

The network component is documented at: https://www.oakestra.io/docs/networking

📈 Monitoring

The provided infrastructure monitoring stack is built on the Grafana OSS toolset. It monitors both root and cluster services for comprehensive visibility. The default Grafana dashboard credentials can be used:

  • Username: admin
  • Password: admin

Access the provisioned dashboards at:

  • Root Infrastructure: <root_orch_ip>:3000
  • Cluster Infrastructure: <cluster_orch_ip>:3001

More details about the monitoring stack can be found in config/README.md.

dashboard's People

Contributors

danimair9, giobart, marianievas, marinovl7, nitindermohan, tfphoenix

dashboard's Issues

[Enhancement] Hide user password in create/edit new user

Short


The password input field in create/edit user is currently not of type="password" and shows the password in plain text. I personally think that hiding the password is a good design decision.

Proposal


Make the input field of type="password" and add an icon/button at the far right of the field that allows the user to toggle the visibility of the written password.

Ratio


I think this makes for a good UI/UX experience and shows the user that the password is treated as something secure.

Impact


Only frontend

Development time


~30 mins.

Status


I want to develop this myself and will start right away

Checklist


  • Discussed
  • Implemented
  • Tested

If no API_ADDRESS is given, use the remote address as default

Description

If I set the frontend web server on the same machine as my API server, I still need to specify the variable API_ADDRESS, which is annoying.

In that case, if the dashboard is deployed at, e.g., oakestra.dashboard.com, my default API_ADDRESS should be oakestra.dashboard.com:10000/api, unless I customize API_ADDRESS to point somewhere else.

Adjust User Roles

Description

Currently the roles of a user are stored like this:

"roles": [ { "name": "Admin", "description": "This is the admin role" }, { "name": "Application_Provider", "description": "This is the app role" }, { "name": "Infrastructure_Provider", "description": "This is the infra role" } ],

change that to this structure:

"roles": ["This is the admin role", "Application_Provider", "Infrastructure_Provider" ]

So that data is not sent back and forth unnecessarily and the structure is simplified.

Change to NGX-Admin Template

To make the Dashboard look a bit better and more professional and to have a consistent design language, the Dashboard should use a template.

One option is the NGX Admin template (https://akveo.github.io/ngx-admin/). It is open source, has a large community, and supports all the features we want.

The following components may need to be adjusted:

  • help
  • app
  • cluster
  • charts
  • dev-home
  • dialog-graph.connection-view
  • graph
  • infrastructure
  • app-list
  • cluster-list
  • navbar
  • not-found
  • add-member
  • edit-organization
  • member-item
  • list-organization
  • organization
  • profile
  • settings
  • addresses
  • arguments
  • connectivity
  • cluster-constraints
  • geo-constraints
  • latency-constraints
  • constraints
  • dialog-connection-setting-view
  • file-select
  • file-upload
  • requirements
  • service-info
  • sla-form
  • users
  • login
  • register
  • reset-password
  • notification

Add views for Organization

Adding organizations requires creating additional views, and some already existing views must be changed as well.

Views for the following items are needed:

  • Logging in as admin.
  • Logging in as a user into an organization.
  • List of all organizations
  • Create new organization
  • Add user to an organization.

Add Redux store

Add NGRX redux store to manage the entire application state with a global store.
This could be compared to a simple in-memory database in the browser.

This especially simplifies the handling of the same data in different components.

Benefits: https://ngrx.io/guide/store/why

Frontend cluster management panel

Short
New feature that will enable adding clusters and attaching them to the Root Orchestrator in a secure manner, and managing them from the dashboard (edit / delete)

Proposal
A new item to manage the user's clusters will be added in the navigation bar of the Dashboard. A dialog will appear asking for the necessary cluster information. Then a temporary JWT secret key will be generated for the purpose of attaching the cluster to the Root.
Also, we will have a main container where all added clusters will be shown and where their management will take place.

Ratio
Implementing this feature will enable users to manage their clusters from the Dashboard interface.

Impact

Development time
15 days

Status
Under development

Checklist

  • Discussed
  • Documented
  • Implemented
  • Tested

Preselect an application

Bug Description

If an application was selected before and then the page is reloaded, the application is no longer selected and you have to click on it again.

Reproduction Steps

  1. Select an application
  2. Reload the page

Expected Behavior

The selected application stays selected

Actual Behavior

The selection is gone

Screenshots


Implement GitHub workflow for automated front-end tests

  • Ensure Karma configuration is complete and correct
  • Add some UTs that will be automatically executed by the workflow (i.e., the Karma runner)
  • Implement GitHub workflow for automated tests
  • Ensure the implemented workflow works properly (e.g. by using ACT)
  • Optional: Integrate/Configure E2E tests (see this)

PS: This is what I believe should be done in order to automate tests on the front-end. Feel free to discuss this issue and offer your input accordingly.

Also see: #35

No cluster active with capacity

When a new instance cannot be created because there is no active cluster with capacity, the NoActiveClusterWithCapacity status does not show up anymore.

Grafana Dashboard Button

What do we initially need?

For infrastructure providers, it would be convenient to access the Grafana dashboard directly from the frontend.

  • We need a button pointing to <system_manager_address>:3000 from the side menu
  • The button should be visible to infrastructure providers only


Status

  • Discussed
  • Implemented
  • Tested

Current Alpha Version Dashboard Can't Deploy Services

Bug Description

Oakestra dashboard setup with the command

sudo -E docker compose -f run-a-cluster/1-DOC.yaml -f run-a-cluster/override-alpha-versions.yaml up

isn't able to deploy services such as the Nginx example from the Oakestra repository README, preventing the user from deploying services in the same way as with the dashboard stable version, set up with the command

sudo -E docker compose -f run-a-cluster/1-DOC.yaml up

Computer Resources

OS: Ubuntu 22.04.3 LTS 64-bit
CPU: Intel® Core™ i5-8250U CPU @ 1.60GHz × 8
Memory: 16.0 GiB
GPU: Mesa Intel® UHD Graphics 620 (KBL GT2)

Reproduction Steps

  1. Have an Ubuntu 22.04 laptop provided by the University of Helsinki with a properly configured Docker Engine or Docker Desktop (true extent of custom configuration unknown)
  2. Proceed with the setup as normal, but use the mentioned Alpha version command
  3. Check that api/clusters and api/clusters/active show an active node
  4. Create an application with namespace test and name test
  5. Create an Nginx service with a description similar to the one shown in the Oakestra repository README (lines 137-149):
Service name: nginx
Namespace: test
Virtualization: Container
Memory: 100MB
Vcpus: 1
Vgpus: 0
Vtpus: 0
Bandwidth in/out: 0
Storage: 0
Port: 6080:80
Code: docker.io/library/nginx:latest
  6. Try to deploy the Nginx service

Expected Behaviour

After deploying the service, it should be scheduled normally, as in the stable version.

Actual Behaviour

After waiting, nothing happens regardless of how many times the user tries to deploy the service, which keeps reporting that no instances have been deployed.


Logs

(NodeEngine, NetManager, and alpha cluster log screenshots attached.)

Update

If I want to deploy services using the Alpha version Dashboard, I can do it with the following actions:

export CLUSTER_NAME=example # No effect
export CLUSTER_LOCATION=48.26280440430051,11.66904127701312,2000 # No effect
export SYSTEM_MANAGER_URL=(device public IP) # Easiest to check in laptops using IP checker tool or ifconfig

sudo -E docker compose -f run-a-cluster/1-DOC.yaml -f run-a-cluster/override-alpha-versions.yaml up # Wait until HTTP requests are seen. If logs start to hang, CTRL + C and run the command again as many times as it takes, usually 2-3 tries
Create another terminal and run sudo nano /etc/netmanager/netcfg.json # Set NodePublicAddress and ClusterUrl to be device public IP
sudo NetManager -p 6000
Create another terminal and run sudo NodeEngine -n 6000 -p 10100 -a (device public ip)
Open a dashboard and /api/docs in a browser
CTRL + C # After setup

sudo -E docker compose -f run-a-cluster/1-DOC.yaml up # Wait until HTTP requests are seen. If logs start to hang, CTRL + C and run the command again as many times as it takes, usually 2 tries
Use the same browser to view the dashboard at http://(device public ip)
Check if /api/clusters and /api/clusters/active in http://(device public ip):10000/api/docs show active nodes to be 1

Create an application with test name and test namespace
Create an Nginx service as seen in the Oakestra README, but change the port to 6080:80
Check that Nginx works by going to http://(device public ip):6080


Unfortunately, logs cannot be seen in the service details using this method.


Additionally, the dashboard might change back to the stable dashboard when either the device is restarted, or applications are removed and the browser is restarted or changed.


In the last picture I recreated the application and the Nginx service after opening the dashboard in Mozilla Firefox. I think this implies that the Alpha version is somehow cached in the browser, and the cached version is used when the change to the Stable version happens.

Refactor graph in the frontend.

In a previous version of the frontend there was a graph in which all services of an application are displayed; you can create connections between the services and define different requirements for each connection.


However, this feature is currently written in plain JavaScript and not well integrated into the frontend. It needs refactoring.

Add Infrastructure View

In Oakestra, there are generally three types of users: the Developer, the Infrastructure Provider, and the Admin.

The view for the Infrastructure Provider still needs to be created.

Adjust Form CSS

Styling in the large input form does not always fit. The colors, padding, and buttons should be adjusted to ensure a more professional look and feel.

  • Change button colors to main colors (#4a7083)
  • Add padding to the fields
  • Improve form validation or feedback on wrong input

Improve the status view

Bug Description

The status view of a service should be improved

Reproduction Steps

  1. Click on the status of a service

Expected Behavior

The current status and the historic values of the status should be displayed in a graph.

Actual Behavior

Only the current actual values of the service are displayed.

Screenshots


Improve code quality

Go through all the current code and look for improvements.

  • remove inline css
  • add interfaces
  • update the Node version and remove unused dependencies
  • remove unused css
  • split large components
  • Customize generated SLA
  • simplify awkwardly written functions

Default API Address Environment Variable

Problem:

When starting the dashboard, an environment variable needs to be specified to ensure the dashboard accesses the correct API. However, if the dashboard and the root orchestrator are intended to run on the same machine, there should be no need to configure this variable manually. It should automatically default to localhost and port 10000.

Expected Behavior:

When the dashboard is launched on the same machine as the root orchestrator, the API address should default to "localhost" and Port "10000" without the need for manual configuration.

Steps to Reproduce:

Launch the dashboard and the root orchestrator on the same machine.
Observe that the API address is not automatically set to "localhost" and Port "10000."

Actual Behavior:

When launching the dashboard on the same machine as the root orchestrator, the API address requires manual configuration.

Development time:

1h

Checklist

  • Discussed
  • Documented
  • Implemented
  • Tested

Refactor send email.

Bug Description

When creating a new user or changing the password, a mail should be sent to the user.

Reproduction Steps

  1. Log in as admin
  2. Create a new user
  3. The new user should then get a mail

Expected Behavior

The user should get a mail

Actual Behavior

The user does not get the mail.

Bug: new service creation fails due to missing application description

Short

It is currently possible to create a new application without a description.
Adding new services to this application will throw errors because the BE expects to find a description for the application.

Proposal

In more detail:
~ 1 min long demo video that showcases this bug
https://github.com/oakestra/dashboard/assets/65814168/78b8afa8-a421-44de-b961-cd348fa99529

Solution

Either remove the BE requirement for applications to have a description,
or enforce it in the FE properly, so that it is no longer possible to create such an error-prone application.

Status

I am looking for feedback. I would like to implement the solution myself but I would like to know if I should adjust the BE or FE.

Checklist

  • Discussed
  • Solved
  • Tested

Add SMTP server configuration fields in the admin frontend

Description:

Currently, we do not have any way to configure an SMTP server in the admin frontend to send email notifications. Therefore, we need to add fields so that the admin can configure an SMTP server.

The following fields need to be added:

  • SMTP server address
  • SMTP port
  • SMTP username
  • SMTP password
  • Use SSL (Yes/No)

The entered information should be stored in the database for later access. Maybe add new collections and store settings-related data.

Steps to implement:

  • Add the fields mentioned above in the admin view.
  • Create a function that validates the inputs and saves the data in the database.
  • Test the feature.

removed oid from service _id

The "oid" field was removed from service id in latest alpha

Check oakestra/oakestra#324 for further details

We need to update the components using ApiService, and ApiService itself, to use _id directly instead of _id.oid.

Auto update dist folder

Let's create a workflow that automatically updates the dist/ folder after each push to develop or main

Add node build to Dockerfile

Add node build support inside the Dockerfile to ship a working FE image that we can use in Oakestra out of the box
