
Hanami contains an experimental artificial neural network, which can work on unnormalized input-data in a cloud native environment.

Home Page: https://docs.hanami-ai.com

License: Apache License 2.0

ai ai-as-a-service cpp neural-network dashboard energy-efficiency multi-tenants openapi-documentation kubernetes-deployment experimental


Hanami


IMPORTANT: This project is still an experimental prototype and NOT ready for any production use.

Intro

At its core, Hanami implements a custom concept for neural networks that is very flexible in behavior and structure, packaged into an as-a-Service backend. The backend is written completely from scratch in C++.

Supported Environment

C++ Compiler (Hanami) Python (SDK) Kubernetes (Rollout)
ubuntu-2204_clang-13 python-3_9 kubernetes-1_26
ubuntu-2204_clang-14 python-3_10 kubernetes-1_27
ubuntu-2204_clang-15 python-3_11 kubernetes-1_28
ubuntu-2404_clang-15 python-3_12 kubernetes-1_29
ubuntu-2404_clang-16 kubernetes-1_30
ubuntu-2404_clang-17
ubuntu-2404_clang-18
ubuntu-2204_gcc-10
ubuntu-2204_gcc-11
ubuntu-2204_gcc-12
ubuntu-2404_gcc-12
ubuntu-2404_gcc-13
ubuntu-2404_gcc-14

Initial goal

I started this project without a specific use-case in mind. The only goal was to create a neural network that is much more dynamic than classical networks with regard to learning behavior and structure, behaving more like the human brain. So it was basically a private research project, but it also shouldn't be just a simple PoC. I also wanted good performance, so I wrote it in C++, optimized the data structures many times and added multi-threading. So far it has mostly been tested with MNIST, because that is the hello-world of neural networks, but also with other small constructed examples.

Even without a concrete use-case for the concept, this entire project was and still is a very good base for me to improve my developer skills and learn new technologies.

Current state of the project

As written above, it is still a prototype. Many tests, input validations, comments and so on are still missing. This project is currently written by a single person alongside a 40h/week job.

Current prototypically implemented features

  • Growing neural network:

    The neural network, which is the core of the project, grows over time while learning new things, by creating new synapses to other nodes when the input requires it. Resizing the network is also roughly linear in complexity.

  • Reduction-Process

    A growing network has basically no size limit, even though the growth rate slows down over time. To limit the growth rate further, a reduction process can be enabled, which removes synapses that were too inactive to reach the threshold for being marked as persistent.

    See measurement-examples
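The reduction idea can be sketched as follows. This is an illustrative Python sketch, not Hanami's actual C++ implementation; the names, activity counter and threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Synapse:
    weight: float
    activity: int = 0        # how often the synapse fired since creation
    persistent: bool = False # once marked, it is never removed again

def reduce_synapses(synapses, persistence_threshold=10):
    """Keep persistent and sufficiently active synapses; drop the rest."""
    kept = []
    for s in synapses:
        if s.persistent or s.activity >= persistence_threshold:
            s.persistent = True
            kept.append(s)
        # inactive, non-persistent synapses are simply not kept
    return kept

synapses = [Synapse(0.5, activity=20), Synapse(0.1, activity=2)]
remaining = reduce_synapses(synapses)
```

In this sketch, a synapse created by a single outlier input never accumulates enough activity, so the next reduction pass removes it again.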

  • No normalization of input

    The input of the network is not restricted to the range 0.0 - 1.0. Any positive value can be inserted. Even a single broken value in the input data that is a million times higher than the rest of the input values has nearly no effect on the already trained data: thanks to the reduction process, all synapses that resulted only from this single input are removed from the network again.

  • Spiking neural network

    The concept also supports a special mode of operation as a spiking neural network. This is optional per created network and basically means that an input is influenced by an older input, depending on how long ago that older input happened.

    See short explanation and measurement-examples

  • No strict layer structure

    The base of a new neural network is defined by a cluster-template. In these templates the structure of the network is planned in hexagons instead of layers. When a node tries to create a new synapse, the location of the target node depends on the location of the source node within these hexagons: the target is random, with a probability that decreases with the distance to the source. This way the static layer structure can be broken up. But by defining a line of hexagons and allowing nodes to connect only to nodes of the next hexagon, a classical layer structure can still be enforced.

    See short explanation and measurement-examples
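The distance-weighted random target selection can be sketched like this. The 1/(d+1) weighting is an assumption for illustration; the text only states that the probability decreases with distance.

```python
import random

def pick_target_hexagon(distances, rng=None):
    """Pick a target hexagon index; nearer hexagons are more likely.

    distances: distance of each candidate hexagon from the source node.
    """
    rng = rng or random.Random()
    # assumed weighting: probability falls off as 1/(distance + 1)
    weights = [1.0 / (d + 1) for d in distances]
    return rng.choices(range(len(distances)), weights=weights, k=1)[0]
```

Restricting the candidate set to the next hexagon in a line of hexagons recovers the classical layer behavior described above.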

  • 3-dimensional networks

    It is basically possible to define 3-dimensional networks. This was only added because the human brain is also a 3D object. The feature exists in the cluster-templates but has not been tested so far. In bigger tests in the future it might become useful for mixing information better.

  • Parallelism

    The processing structure also works with multiple threads operating on the same network at the same time. (GPU support with CUDA is disabled at the moment for various reasons.)

  • Usable performance

    The 60,000 training images of the MNIST handwritten digits dataset can be trained on CPU in about 4 seconds for the first epoch, without any batch processing of the input data, resulting in an accuracy of 93-94 % after this time.

  • Generated OpenAPI-Documentation

    The OpenAPI documentation is generated directly from the code. Changing the settings of a single endpoint in the code therefore automatically changes the resulting documentation, ensuring that code and documentation stay in sync.

    See OpenAPI-docu

  • Multi-user and multi-project

    The project supports multiple users and multiple projects with different roles (member, project-admin and admin), and access to individual API endpoints is managed via a policy file. Each user can log in with username and password and receives a JWT token to access the user- and project-specific resources.

    See Authentication-docu
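For orientation: any JWT, including the one issued at login, is three base64url parts separated by dots (header.payload.signature), as defined by the JWT standard. This stdlib-only sketch decodes the (unverified) payload; the claim names in the test are examples, not necessarily the ones Hanami uses.

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the middle (payload) part of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

Note that this only inspects the token; validating the signature (which the backend must do) requires the signing key and a JWT library.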

  • Efficient resource-usage

    1. The concept of the neural network has the effect that only the necessary synapses of active nodes are processed, based on the input. So if only very few input nodes receive data, less processing time is needed to process the network.

    2. Because of the multi-user support, networks of multiple users can be processed on the same physical host and share RAM, CPU cores and even the GPU, without splitting them up via virtual machines or vCPUs.

    3. Capability to regulate the CPU frequency and measure power consumption. (currently disabled)

      See Monitoring-docu

  • Network-input

    There are 2 variants for interacting with the neural networks:

    1. Uploading the dataset and starting an asynchronous task based on this dataset over the API

      See OpenAPI-docu

    2. Communicate with the neural network directly via websocket. In this case not a whole dataset is pushed through; instead a single network input is sent. The call blocks until the network returns the output, which gives more control.

      See Websocket-docu
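The blocking request/response pattern can be sketched as below. The connection object and the JSON message framing are placeholders for illustration; Hanami's actual websocket protocol may differ, so consult the websocket documentation for the real message format.

```python
import json

def request_output(connection, input_values):
    """Send one network input and block until the output arrives.

    `connection` is assumed to provide send(str) and recv() -> str,
    as typical synchronous websocket clients do.
    """
    connection.send(json.dumps({"input": input_values}))
    # the call blocks here until the network has produced its output
    return json.loads(connection.recv())["output"]
```

Because each call waits for the network's answer, the caller can react to every single output before deciding on the next input.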

  • Remote-connection

    1. SDK (Python, Go, Javascript)

      The SDK was implemented in Python, Go and Javascript. Most of the endpoints are covered by all 3 of them. The Python version has the most features; it is the only one supporting direct communication with the network, and it is also the one used by the test script that tests all important endpoints.

      See SDK-docu

    2. CLI (Go)

      A first basic CLI client also exists; it supports most of the endpoints and is written in Go. In my opinion Go is really good for this because of its standard library and because the binaries are mostly statically linked, which means fewer dependencies on the host where the client is executed. I also wanted to get some practical experience with Go. The client uses the Go version of the SDK.

      See CLI-docu

    3. Dashboard (HTML + Javascript + CSS)

      As a PoC a first dashboard was created, without any framework, using the Javascript version of the SDK. At the moment it is not really maintained because of a lack of time. The main motivation for this dashboard was to learn the basics of web development, and because a visual output simply looks better when showing this project to someone.

      See Dashboard-docu

  • Installation on Kubernetes and with Ansible

    The backend can be deployed on Kubernetes via Helm chart or plain via Ansible.

    See Installation-docu

Known disadvantages (so far)

  • Very inefficient for binary input, where the inputs of single nodes can only be 0 or 1, without values in between

  • A single synapse needs more memory than in a classical network. The hope is that in bigger tests it becomes much more efficient compared to fully meshed layered networks.

  • Few inputs but many outputs gives bad results. There are ideas for how to fix this issue, but they are not yet implemented, and it has not been tested whether they are a good fix.

Possible use-case (maybe)

Because normalization of the input is not necessary, and given the good performance of training single inputs (based on the benchmark) and the direct remote interaction over websockets, this project could be useful for processing measurement data from sensors of different machines, especially new sensors whose exact maximum output values are unknown. Continuous training of the network right from the beginning would then be possible, without having to collect a bunch of data first.

Summary important links

https://docs.hanami-ai.com

If you need help setting things up, have a question or anything like that, feel free to contact me by eMail or use the question template in the issues.

Issue-Overview

Hanami-Project

This repository

Required packages for development:

sudo apt-get install -y git ssh gcc g++ clang-15 clang-format-15 make cmake bison flex libssl-dev libcrypto++-dev libboost-dev uuid-dev  libsqlite3-dev protobuf-compiler

Clone repo with:

git clone --recurse-submodules https://github.com/kitsudaiki/Hanami.git
cd Hanami

# load pre-commit hook
git config core.hooksPath .git-hooks

In case the repo was cloned without submodules initially:

git submodule init
git submodule update --recursive

Mkdocs and plugins:

pip3 install mkdocs-material mkdocs-swagger-ui-tag mkdocs-drawio-exporter

(to build the documentation, Draw.io also has to be installed on the system)

Build

For the build of the docker-images, earthly is used.

Build Hanami

The code can be built as an image like this:

earthly +image --image_name=<DOCKER_IMAGE_NAME>

Example:

earthly +image --image_name=hanami:test

Build Hanami-docs

The documentation can be built as an image like this:

earthly +docs --image_name=<DOCKER_IMAGE_NAME>

Example:

earthly +docs --image_name=hanami_docs:test

The documentation listens on port 8000 within the docker container, so the port has to be forwarded into the container:

docker run -p 127.0.0.1:8080:8000 hanami_docs:test

After this, the documentation can be opened in the browser at the address 127.0.0.1:8080.

Contributing

If you want to help the project by contributing things, you can check the Contributing guide.

Author

Tobias Anker

eMail: [email protected]

License

The complete project is under Apache 2 license.

hanami's People

Contributors: dependabot[bot], kitsudaiki

hanami's Issues

[Feature]: Improve output of system-information in dashboard

Feature

Description

The current output of the system-information coming from Hanami is only a plain printed json. This has to be packed in a better graphical structure.

Acceptance Criteria

  1. a good representation format for the output was defined
  2. the new format was integrated into the dashboard

Additional Information

Blocked by

How to test

rework data-items

Feature-request

Description

The data-items are at the moment a structure of classes where each object within the tree is created by a new-call. Thanks to the memory-leak tests this is at least no source of memory leaks, but it is slow to create and parse in libKitsunemimiJson. It has to be reworked into a structure where the whole tree is located in one single byte-buffer. That way the massive amount of new and delete calls is no longer necessary, serialization and deserialization within the project are not needed, and the code becomes much faster.
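The "one flat byte-buffer instead of a tree of allocated objects" idea can be sketched as below. The layout (length-prefixed key, 64-bit value) is purely illustrative; the point is that the buffer itself already is the serialized form, so no separate (de)serialization step and no per-node allocation is needed.

```python
import struct

def pack_items(items):
    """Pack a flat dict of str -> int into one contiguous byte-buffer."""
    buf = bytearray()
    for key, value in items.items():
        kb = key.encode()
        # assumed layout: <u32 key-length> <key bytes> <i64 value>
        buf += struct.pack("<I", len(kb)) + kb + struct.pack("<q", value)
    return bytes(buf)

def unpack_items(buf):
    """Read the items back without any intermediate object tree."""
    items, pos = {}, 0
    while pos < len(buf):
        (klen,) = struct.unpack_from("<I", buf, pos); pos += 4
        key = buf[pos:pos + klen].decode(); pos += klen
        (value,) = struct.unpack_from("<q", buf, pos); pos += 8
        items[key] = value
    return items
```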

add error-popups to dashboard

Feature-request

Description

For error messages within modals there is already a basic error output. For errors that occur in actions like requesting lists or measurements when the connection is broken, there is no error output, only empty content with an error in the console output. So to improve usability, error popups should be added for these asynchronous cases.

Implement Izumi and Inori

Update

Description

The last two planned main components still have to be implemented and connected to the infrastructure.

  • IzumiShare
  • InoriLink
  • libIzumiShare
  • libInoriLink

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive
  • ToriiGateway
  • TsugumiTester
  • libAzukiHeart
  • libMisakiGuard
  • libShioriArchive
  • HanamiAI-Dashboard
  • HanamiAI-CLI
  • libHanamiAiSdk
  • libKitsunemimiHanamiNetwork

Bind tcp-server

Feature-request

Description

At the moment only the unix-domain-socket servers are used for internal communication, not the tcp-servers. These only listen on 0.0.0.0, so they are a bit limited at the moment anyway. For future setups, this should be updated so they can be bound to specific interfaces.

Create Monorepo

Change

Description

Currently this repository serves only as a central anchor for the project. All sub-components and libraries are attached as git submodules. The basic idea of keeping everything as separate repositories was the re-usability of the libraries in other projects. At that time I still had my automation tool, which also used these libraries.

But there are problems:

  • There are many repositories now. With 33 attached repositories it becomes quite hard to handle. Even basic things like tagging all repos or fixing the badges in the READMEs (#23) take a lot of time. Keeping everything in a consistent state also becomes quite hard.
  • git-submodules can be tricky
  • maybe future projects will use Rust instead of C++, so keeping the libraries in separate repositories is potentially not necessary anymore

The primary problem is the large maintenance workload for the large number of repositories, which will grow even more in the future.

So I decided to close all the sub-repositories and merge their source code into this repository. This should make the workflow more effective, even if this structure has its own disadvantages.

Add new basic GPU-support

Feature-request

Description

There were already multiple attempts in the past to bring the core of Kyouko onto a GPU. The main problems were the allocation of new memory and race conditions. Past tests did successfully run a network with the MNIST test case on a GPU, but the performance was very bad. So other ways have to be found; maybe GPUs could be used in a supporting role.

For processing on the GPU, OpenCL has to be used for now, to stay hardware-independent.

rename sakura-items

Update

Description

Rename BlossomLeaf to BlossomIO to be more explicit.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive
  • libAzukiHeart
  • libMisakiGuard
  • libKitsunemimiHanamiNetwork
  • libKitsunemimiSakuraLang

use dropdown-menu in dashboard tables

Feature-request

Description

Actions in the single rows of tables are separate buttons at the moment. This uses a lot of space and doesn't scale well. So a dropdown menu should be used instead, collecting all the buttons.

make `name`-db-field mandatory

Update

Description

Each generic database table should have a mandatory name column, beside the uuid and so on.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • MisakiGuard
  • SagiriArchive
  • libKitsunemimiHanamiDatabase

rename libKitsunemimiHanamiMessaging

Update

Description

Rename libKitsunemimiHanamiMessaging to libKitsunemimiHanamiNetwork

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • SagiriArchive
  • ToriiGateway
  • libAzukiHeart
  • libMisakiGuard
  • libSagiriArchive
  • libKitsunemimiHanamiMessaging

compatibility problem with openssl 3

BUG-issue

Description

With openssl version 3, which is installed with Ubuntu 22.04, there are compiler warnings like this:

 ../../../libKitsunemimiCrypto/src/signing.cpp:35:33: warning: 'HMAC_CTX* HMAC_CTX_new()' is deprecated: Since OpenSSL 3.0 [-Wdeprecated-declarations]
   35 |     HMAC_CTX* ctx = HMAC_CTX_new();
      |                     ~~~~~~~~~~~~^~
In file included from ../../../libKitsunemimiCrypto/include/libKitsunemimiCrypto/signing.h:14,
                 from ../../../libKitsunemimiCrypto/src/signing.cpp:9:
/usr/include/openssl/hmac.h:33:33: note: declared here
   33 | OSSL_DEPRECATEDIN_3_0 HMAC_CTX *HMAC_CTX_new(void);
      |                                 ^~~~~~~~~~~~

The binaries still build, but this has to be fixed.

[Feature]: Add error-states to task-queue

Feature

Description

When a task in the backend fails, there is no feedback for the user. The task should end in an error state with a message describing what went wrong.

Acceptance Criteria

  1. a new error-state for tasks was added
  2. if a task has an error, it ends in the new error-state
  3. the error-message is returned to the user

Additional Information

Blocked by

How to test

[Feature]: Add timestamps to database

Feature

Description

Add timestamps as a mandatory field to the database tables, storing the UTC timestamp of when the entry was created.

Acceptance Criteria

  1. timestamps were added to all database-entries

Additional Information

Blocked by

How to test

generate OpenAPI docu

Feature-request

Description

The generation of the REST-API documentation should also support the OpenAPI format as output.

Thread-limitation while building

Update

Description

The libraries that contain a parser-generator cannot be built with multiple threads in parallel, because this results in compile errors. The only solution found so far, used in the build scripts of the single repositories, is to limit the number of build threads to 1 for those repositories only. For the new central project file in this repository this solution doesn't work well; for now only a global limit of 1 worked. This slows down all repositories that don't need the workaround and makes the build process unnecessarily slow.
A solution has to be found to fix this performance issue.

[Feature]: count and handle failed logins

Feature

Description

To prevent brute-force attacks against a user account, there should be a counter for failed login attempts. After 3 failed attempts there should be a forced wait time, configurable via the config file; only after this timespan should another login attempt be possible.
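The proposed counter could look like this sketch. Class and parameter names are illustrative; in the real implementation the limits would come from the config file, as the issue describes.

```python
import time

class LoginGuard:
    """Track failed logins per user and enforce a lockout window."""

    def __init__(self, max_failures=3, wait_seconds=60):
        self.max_failures = max_failures
        self.wait_seconds = wait_seconds
        self.failures = {}  # username -> (failure count, last failure time)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        return count >= self.max_failures and now - last < self.wait_seconds

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        # a successful login resets the counter
        self.failures.pop(user, None)
```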

Acceptance Criteria

  1. a counter was added for failed logins
  2. a new config-entry was added to define the allowed number of failed logins

Additional Information

Blocked by

How to test

[Feature]: make table-entries in dashboard clickable

Feature

Description

Entries in tables should be clickable to show additional information of the resource in a modal.

Acceptance Criteria

  1. entries in the tables within the dashboard can be clicked
  2. When clicking on an entry, a new modal opens with more information about the selected object

Additional Information

Blocked by

How to test

fix github workflow badge

Update

Description

There was an upstream change which breaks the github workflow badge in the readme files:

badges/shields#8671

Has to be fixed to this (example for libKitsunemimiCommon):

![Github workfloat status](https://img.shields.io/github/actions/workflow/status/kitsudaiki/libKitsunemimiCommon/build_test.yml?branch=develop&style=flat-square&label=build%20and%20test)

Kitsunemimi-Repos, which have to be updated

  • libHanamiAiSdk
  • libKitsunemimiHanamiNetwork
  • libKitsunemimiHanamiPolicies
  • libKitsunemimiHanamiDatabase
  • libKitsunemimiHanamiSegmentParser
  • libKitsunemimiHanamiClusterParser
  • libKitsunemimiHanamiCommon
  • libKitsunemimiSakuraNetwork
  • libKitsunemimiSakuraDatabase
  • libKitsunemimiSakuraHardware
  • libKitsunemimiSqlite
  • libKitsunemimiCpu
  • libKitsunemimiObj
  • libKitsunemimiOpencl
  • libKitsunemimiConfig
  • libKitsunemimiArgs
  • libKitsunemimiNetwork
  • libKitsunemimiIni
  • libKitsunemimiJwt
  • libKitsunemimiCrypto
  • libKitsunemimiJson
  • libKitsunemimiCommon

add submodules of all repos

Update

Description

This Hanami-AI meta-repository should contain all repositories in the context of this project as submodules, to have a direct link and the ability to check out all necessary repositories at once.

review-api-endpoints

Update

Description

Review and update/fix the api-endpoints: descriptions, regex and so on.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive

use Github-Actions

Update

Description

use Github-Actions as CI-pipeline

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • SagiriArchive
  • ToriiGateway
  • TsugumiTester
  • libAzukiHeart
  • libMisakiGuard
  • libSagiriArchive
  • libKitsumiAiSdk
  • libKitsunemimiHanamiMessaging
  • libKitsunemimiHanamiPolicies
  • libKitsunemimiHanamiDatabase
  • libKitsunemimiHanamiEndpoints
  • libKitsunemimiHanamiCommon
  • libKitsunemimiSakuraNetwork
  • libKitsunemimiSakuraLang
  • libKitsunemimiSakuraDatabase
  • libKitsunemimiSakuraHardware
  • libKitsunemimiSqlite
  • libKitsunemimiCpu
  • libKitsunemimiStorage
  • libKitsunemimiInterface
  • libKitsunemimiObj
  • libKitsunemimiConfig
  • libKitsunemimiArgs
  • libKitsunemimiNetwork
  • libKitsunemimiJinja2
  • libKitsunemimiIni
  • libKitsunemimiJwt
  • libKitsunemimiCrypto
  • libKitsunemimiJson
  • libKitsunemimiCommon

remove sub-namespaces in generic libraries

Update

Description

At the moment the generic libraries have additional library-specific namespaces. For example, libKitsunemimiJson has

namespace Kitsunemimi
{
namespace Json
{
...
}
}

this should be changed to only

namespace Kitsunemimi
{
...
}

This is because the internal objects are uniquely named enough anyway, and the extra namespace only bloats the syntax in the upper layers.

Kitsunemimi-Repos, which have to be updated

  • libKitsunemimiSqlite
  • libKitsunemimiCpu
  • libKitsunemimiObj
  • libKitsunemimiOpencl
  • libKitsunemimiConfig
  • libKitsunemimiArgs
  • libKitsunemimiNetwork
  • libKitsunemimiIni
  • libKitsunemimiJwt
  • libKitsunemimiCrypto
  • libKitsunemimiJson

update build-script for better branch-handling

Update

Description

The current build scripts of the repository only roll out the list of defined tags/branches of the listed repositories. This has two problems:

  1. feature-branches also have to be defined explicitly, which makes features spanning multiple repositories more complicated to handle
  2. between develop and stable branches, the list has to be replaced every time when tagging

Desired behavior:

  1. version, tag, hotfix and rolling branches should use only the list of defined branches
  2. develop-branches should use only develop branches, regardless of the predefined list
  3. feature-branches should try to use the feature-branch with the same name for all dependencies too, and fall back to the develop-branch if it doesn't exist

With this, no change to the build script is necessary anymore when developing new features that affect multiple repositories.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive
  • ToriiGateway
  • TsugumiTester
  • libAzukiHeart
  • libMisakiGuard
  • libShiroiArchive
  • libHanamiAiSdk
  • libKitsunemimiHanamiNetwork
  • libKitsunemimiHanamiPolicies
  • libKitsunemimiHanamiDatabase
  • libKitsunemimiHanamiCommon
  • libKitsunemimiSakuraNetwork
  • libKitsunemimiSakuraDatabase
  • libKitsunemimiSakuraHardware
  • libKitsunemimiSqlite
  • libKitsunemimiCpu
  • libKitsunemimiObj
  • libKitsunemimiOpencl
  • libKitsunemimiConfig
  • libKitsunemimiArgs
  • libKitsunemimiNetwork
  • libKitsunemimiIni
  • libKitsunemimiJwt
  • libKitsunemimiCrypto
  • libKitsunemimiJson
  • libKitsunemimiCommon

use websockets for measurements

Feature-request

Description

The measurements of the power consumption and thermal output of the cpu are requested over the REST-API at the moment. To increase performance, the values should be pushed over websockets. This would also make live updates of the data possible, which requires a redraw of the d3 output.

Persist measurements

Feature-request

Description

The measurements of temperature, speed and power consumption by Azuki should be persisted so they do not get lost when restarting the program.

[Feature]: Change visibility of resources

Feature

Description

At the moment all resources, like data-sets, are private, so only the owning user, the project-admin and the global admin can see them. To change this, setters are necessary for each resource to be able to change its visibility.

Acceptance Criteria

  1. each resource got updated endpoints to change its visibility
  2. the OpenAPI documentation was updated
  3. the SDK-API tester was updated

Additional Information

Blocked by

How to test

create multiple users with different projects and permissions, and check what they can see of other users' resources depending on the selected visibility mode

[Feature]: Rework internal handling of temporary files

Feature

Description

The handling of temporary files, for example when uploading data-sets, is a very primitive implementation. The following work packages are necessary to improve it.

Acceptance Criteria

  1. temporary files are user- and project-scoped
  2. they are automatically removed a certain amount of time after an upload broke
  3. an upload can be continued after the connection broke

Additional Information

Blocked by

How to test

use only protobuffers

Update

Description

For more flexibility and consistency, only protobuffers should be used, not a mix. This should also make the code easier to maintain.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive
  • ToriiGateway
  • libAzukiHeart
  • libMisakiGuard
  • libShiroiArchive
  • libHanamiAiSdk
  • libKitsunemimiHanamiMessages
  • libKitsunemimiHanamiNetwork

remove libKitsunemimiSakuraLang from project

Update

Description

The library libKitsunemimiSakuraLang should be removed from the project. It contains a script language that I created for my automation tool SakuraTree. Its primary feature was to easily define parallel tasks for a faster deploy process.
I wanted to use the library again for the internal API of Hanami-AI, but it changed over time, and now only some minor parts are still used for validating the input and output variables of internal APIs. The rest is not needed anymore, and it doesn't seem like this will change.
So even though this means a large amount of work becomes deprecated and lost, I decided to delete the library from the project, because I don't like dead and unmaintained code. The only parts still in use should be moved to libKitsunemimiHanamiNetwork, which uses them at the moment.
Removing the library also means that the jinja2-parser libKitsunemimiJinja2 becomes deprecated too, because it is not used anywhere else.

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • ShioriArchive
  • ToriiGateway
  • libAzukiHeart
  • libMisakiGuard
  • libShiroiArchive
  • libKitsunemimiHanamiNetwork
  • libKitsunemimiHanamiEndpoints
  • libKitsunemimiHanamiCommon
  • libKitsunemimiSakuraLang
  • libKitsunemimiJinja2

make REST-API-docu in dashboard configurable

Feature-request

Description

At the moment there is only one button, which downloads the pdf version of the REST-API documentation. A modal should be added with options to change the output format.

Generate config-documentation

Update

Description

Similar to the generation of the REST-API docs, the documentation of the config files should be generated automatically too. This doesn't have to be done over a REST-API endpoint.

Generate database-documentation

Update

Description

Similar to the generation of the REST-API docs, the documentation of the database schemas should be generated automatically too. This doesn't have to be done over a REST-API endpoint.

use new branch structure

Update

Description

Changes:

  • create develop-branch
  • create rolling-branch (if required)
  • delete master-branch

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • SagiriArchive
  • IzumiShare
  • InoriLink
  • ToriiGateway
  • TsugumiTester
  • libKyoukoMind
  • libAzukiHeart
  • libMisakiGuard
  • libSagiriArchive
  • libIzumiShare
  • libInoriLink
  • KitsumiAI-Dashboard
  • KitsumiAI-CLI
  • libKitsumiAiSdk
  • libKitsunemimiHanamiMessaging
  • libKitsunemimiHanamiPolicies
  • libKitsunemimiHanamiDatabase
  • libKitsunemimiHanamiEndpoints
  • libKitsunemimiHanamiCommon
  • libKitsunemimiSakuraNetwork
  • libKitsunemimiSakuraLang
  • libKitsunemimiSakuraDatabase
  • libKitsunemimiSakuraHardware
  • libKitsunemimiSqlite
  • libKitsunemimiCpu
  • libKitsunemimiStorage
  • libKitsunemimiInterface
  • libKitsunemimiObj
  • libKitsunemimiOpencl
  • libKitsunemimiConfig
  • libKitsunemimiArgs
  • libKitsunemimiNetwork
  • libKitsunemimiJinja2
  • libKitsunemimiIni
  • libKitsunemimiJwt
  • libKitsunemimiCrypto
  • libKitsunemimiJson
  • libKitsunemimiCommon

Add logout-function

Feature-request

Description

The dashboard already has a logout function, but it only removes all cookies, including the auth token. To make this process complete, the backend should block the token after logout, for as long as the token is not yet expired.
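A common way to do this is a server-side blocklist that holds each logged-out token until its own expiry, after which the entry can be dropped. This is a generic sketch of that pattern, not Hanami's implementation; names are illustrative.

```python
import time

class TokenBlocklist:
    """Hold logged-out tokens until they would have expired anyway."""

    def __init__(self):
        self._blocked = {}  # token -> expiry timestamp

    def block(self, token, expires_at):
        self._blocked[token] = expires_at

    def is_blocked(self, token, now=None):
        now = time.time() if now is None else now
        # prune entries whose token has expired; they reject themselves now
        self._blocked = {t: e for t, e in self._blocked.items() if e > now}
        return token in self._blocked
```

In practice one would store the token's signature or jti claim rather than the full token, but the lifecycle is the same.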

set branch and tag rules

Update

Description

Set rules for develop and v* branches and v*-tags

Kitsunemimi-Repos, which have to be updated

  • all repositories

big-endian and little-endian architecture

Update

Description

The current network stack doesn't consider big-endian and little-endian architectures, which will result in crashes, for example when the sdk-lib runs on a computer with a different architecture, or later when components run on different nodes with potentially different architectures.
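The underlying problem: the same 32-bit value has different byte layouts on little- and big-endian machines, so a portable network stack must fix one wire order (conventionally big-endian, "network order") with explicit conversions. A minimal demonstration:

```python
import struct

value = 0x01020304

# the same integer, serialized with each byte order:
little = struct.pack("<I", value)  # little-endian: b'\x04\x03\x02\x01'
big = struct.pack(">I", value)     # big-endian (network order): b'\x01\x02\x03\x04'

# decoding with the declared wire order is architecture-independent;
# decoding with the wrong assumption silently yields a different value
assert struct.unpack(">I", big)[0] == value
assert struct.unpack("<I", big)[0] != value
```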

Change origin from private gitlab to github

Update

Description

Change origin from private gitlab to github

Kitsunemimi-Repos, which have to be updated

  • KyoukoMind
  • AzukiHeart
  • MisakiGuard
  • SagiriArchive
  • IzumiShare
  • InoriLink
  • ToriiGateway
  • TsugumiTester
  • libKyoukoMind
  • libAzukiHeart
  • libMisakiGuard
  • libSagiriArchive
  • libIzumiShare
  • libInoriLink
  • libToriiGateway
  • KitsumiAI-Dashboard
  • KitsumiAI-CLI
  • libKitsumiAiSdk
  • libKitsunemimiHanamiMessaging
  • libKitsunemimiHanamiPolicies
  • libKitsunemimiHanamiDatabase
  • libKitsunemimiHanamiEndpoints
  • libKitsunemimiHanamiCommon

Change workflow

Update

Description

The current workflow with many repositories is not optimal, especially with regard to central automated tests across all repositories with API tests in the near future. As preparation, this repository should not only be a place for generic issues and other organizational stuff, but should become the central point of the project.

Memory-leak when CLOSING websocket-client in sdk-lib

BUG-issue

Description

see https://github.com/kitsudaiki/Hanami-AI/blob/develop/src/sdk/cpp/libHanamiAiSdk/src/common/websocket_client.cpp#L40

Even though m_websocket is initialized with new, deleting the variable crashes the program. In all examples for the websocket of the boost library, it is initialized on the stack instead of the heap; for cleanup they do some dirty internal things which fail when the object was created with new and is deleted at the end.

A solution has to be found for keeping the websocket open and cleaning up without a memory leak.

Because only a few websockets are expected in the lifetime of a client, this doesn't seem too critical at the moment.
