
das's Introduction

OpenCog Hyperon - Distributed Atomspace (DAS)

Here, we keep material that pertains to the DAS project but doesn't belong to any of the other project-specific repositories, such as automated deployment scripts, API and high-level documentation, and assets used by other repositories' CI/CD scripts.

Documentation

Repositories

In addition to this one, we use many other repositories in the project.

Project management

  • Public board - GitHub project board used to track bug reports, feature requests and major new features planning. Use this board to report bugs or request new features.
  • Development board - Used internally by the DAS Team to track issues and tasks.
  • Contribution guidelines


das's Issues

Separate Tests and Lint into Individual jobs on pipeline

Currently, we have a single job in the pipeline that executes the pre-commit command, running lint, unit tests, coverage, and integration tests. To improve visibility and make problems easier to identify, we need to separate each of these actions into its own job in the pipeline. This will allow for more organized execution and make it easier to detect specific failures at each stage.

Add static analysis to the pipeline of the main repos

  • Query engine
  • Atom DB
  • Functions

Make sure we have a make pre-commit command in all these repos. This target is supposed to run:

(a) Unit tests
(b) Unit test coverage check
(c) Integration tests
(d) Static analysis

Each of these checks is supposed to be executed in every repo's pipeline against every newly changed PR. PRs can be allowed to merge even if some of these checks fail.

Each check should be implemented separately in the pipeline (as 4 different checks) but we should have an action to call all of them at once. In a future task (#52), we'll use this action to run periodic (e.g. daily) checks on our repos, reporting problems in our internal Mattermost channel.
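As a rough sketch of that "call all of them at once" action, the script below simply shells out to make targets and reports every failing check at the end instead of stopping at the first one. The target names (lint, unit-tests, coverage, integration-tests) are assumptions for illustration and may differ from the actual Makefiles.

import subprocess

# Assumed make targets; the real target names may differ per repository.
CHECKS = ["lint", "unit-tests", "coverage", "integration-tests"]

def run_all_checks() -> dict:
    """Run every check, collecting exit statuses instead of stopping at the first failure."""
    results = {}
    for target in CHECKS:
        completed = subprocess.run(["make", target])
        results[target] = completed.returncode == 0
    return results

if __name__ == "__main__":
    failed = [name for name, ok in run_all_checks().items() if not ok]
    print("Failed checks:", ", ".join(failed) if failed else "none")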

Update User's Guide to compare the traversal API with Neo4j

Ben's comment in hyperon-development:

In general it might be interesting to comment on how the traversals you describe here compare in function to Neo4j traversals, and how scripting traversals in MeTTa compares to scripting traversals in Cypher

integration test failure

Following the README integration test instructions:

../das-query-engine$ git branch -vv
* master c108236 [origin/master] Merge pull request singnet/das-query-engine#293 from singnet/angelo/bug/#62/better-msg-about-unsupported-python-version

(hyperon-das-py3.10) .../das-query-engine$ make integration-tests
==================================================== test session starts ====================================================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /home/mozi/.cache/pypoetry/virtualenvs/hyperon-das-mpMqY1xH-py3.10/bin/python
cachedir: .pytest_cache
rootdir: /media/mozi/7361de95-66da-41ce-8d3d-d05196c633a1/hyperon/das-query-engine
configfile: pytest.ini
plugins: cov-4.1.0
collected 42 items                                                                                                          
...
...
tests/integration/test_local_das.py::TestLocalDASRedisMongo::test_queries FAILED

========================================================= FAILURES ==========================================================
____________________________________________ TestLocalDASRedisMongo.test_queries ____________________________________________

self = <tests.integration.test_local_das.TestLocalDASRedisMongo object at 0x777266e1b130>

    def test_queries(self):
        _db_up()
        das = DistributedAtomSpace(
            query_engine='local',
            atomdb='redis_mongo',
            mongo_port=mongo_port,
            mongo_username='dbadmin',
            mongo_password='dassecret',
            redis_port=redis_port,
            redis_cluster=False,
            redis_ssl=False,
        )
>       assert das.count_atoms() == {'atom_count': 0}

tests/integration/test_local_das.py:27: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hyperon_das/das.py:440: in count_atoms
    return self.query_engine.count_atoms(parameters)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <hyperon_das.query_engines.local_query_engine.LocalQueryEngine object at 0x777266f0b310>, parameters = None

    def count_atoms(self, parameters: Optional[Dict[str, Any]] = None) -> Dict[str, int]:
        if parameters and parameters.get('context') == 'remote':
            return {}
>       return self.local_backend.count_atoms(parameters)
E       TypeError: RedisMongoDB.count_atoms() takes 1 positional argument but 2 were given

hyperon_das/query_engines/local_query_engine.py:297: TypeError
================================================== short test summary info ==================================================
FAILED tests/integration/test_local_das.py::TestLocalDASRedisMongo::test_queries - TypeError: RedisMongoDB.count_atoms() takes 1 positional argument but 2 were given
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
==================================== 1 failed, 29 passed, 2 skipped in 84.38s (0:01:24) =====================================
make: *** [Makefile:19: integration-tests] Error 1
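The traceback suggests a signature mismatch rather than a data problem: LocalQueryEngine.count_atoms passes a parameters argument, while the installed RedisMongoDB.count_atoms accepts none. A minimal sketch of one possible fix (shown on an illustrative stand-in class, not the real implementation) is to let the backend accept, and for now ignore, the optional argument; alternatively, the query engine could drop the argument when parameters is None.

from typing import Any, Dict, Optional

class RedisMongoDBStandIn:
    """Illustrative stand-in for the real RedisMongoDB backend."""

    def count_atoms(self, parameters: Optional[Dict[str, Any]] = None) -> Dict[str, int]:
        # Accept the optional `parameters` that the query engine passes and ignore
        # it for now; the existing counting logic would go here.
        return {"atom_count": 0}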

Review release process

Suggested changes:

  • Update all of the related Dockerfiles to use a fixed version of Ubuntu instead of ubuntu-latest

failed unit test

After poetry install on Ubuntu 22.04, I get this unit test failure:

~/hyperon/das-query-engine$ make unit-tests
==================================== test session starts =====================================
platform linux -- Python 3.10.12, pytest-7.3.1, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /media/mozi/7361de95-66da-41ce-8d3d-d05196c633a1/hyperon/das-query-engine
configfile: pytest.ini
plugins: cov-4.0.0, requests-mock-1.11.0
collected 110 items                                                                          
...
...
tests/unit/test_das.py::TestDistributedAtomSpace::test_create_das PASSED
tests/unit/test_das.py::TestDistributedAtomSpace::test_get_incoming_links FAILED

========================================== FAILURES ==========================================
______________________ TestDistributedAtomSpace.test_get_incoming_links ______________________

self = <tests.unit.test_das.TestDistributedAtomSpace object at 0x7230529db340>

    def test_get_incoming_links(self):
>       das = DistributedAtomSpaceMock()

tests/unit/test_das.py:35: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <tests.unit.mock.DistributedAtomSpaceMock object at 0x7230525d1cc0>
query_engine = 'local', kwargs = {}

    def __init__(self, query_engine: Optional[str] = 'local', **kwargs) -> None:
>       self.backend = DatabaseAnimals()
E       TypeError: Can't instantiate abstract class DatabaseAnimals with abstract method get_matched_node_name

tests/unit/mock.py:27: TypeError
================================== short test summary info ===================================
FAILED tests/unit/test_das.py::TestDistributedAtomSpace::test_get_incoming_links - TypeError: Can't instantiate abstract class DatabaseAnimals with abstract method get_matc...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================ 1 failed, 59 passed in 0.36s ================================
make: *** [Makefile:13: unit-tests] Error 1
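The error suggests the test mock lags behind the installed hyperon-das-atomdb, which now declares get_matched_node_name as abstract. A minimal sketch of how the mock could be unblocked is shown below on illustrative stand-in classes; the method signature is an assumption taken from the error message, not the real interface.

from abc import ABC, abstractmethod
from typing import List

class AtomDBInterface(ABC):
    """Illustrative stand-in for the abstract AtomDB interface used by the test mocks."""

    @abstractmethod
    def get_matched_node_name(self, node_type: str, substring: str) -> List[str]:
        ...

class DatabaseAnimals(AtomDBInterface):
    """Test double: it must implement every abstract method to be instantiable."""

    def get_matched_node_name(self, node_type: str, substring: str) -> List[str]:
        # An empty match list is enough for tests that never exercise this call path.
        return []

db = DatabaseAnimals()  # no longer raises "Can't instantiate abstract class ..."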

Add a "title" line in all Mattermost version update reports

The Mattermost version update report messages are missing a first line identifying what the update refers to. The library reports have a "Library" line which partly covers this, but the other updates don't have it. So my suggestion is to add a "title" line to ALL update messages, making explicit which deliverable is actually being released. The messages would look something like this:

New version of hyperon-das

Deliverable: library in PyPI named hyperon-das
New version available: 0.7.5
Repository: https://github.com/singnet/das-query-engine/releases/tag/0.7.5
Changes:
[#201] Implement fetch() in the DAS API
[#223] Updates in docstrings and logging messages

New version of FaaS gateway

Deliverable: tagged branch (master) in the repository
New version available: 1.12.6
Repository: https://github.com/singnet/das-serverless-functions/releases/tag/1.12.6
Changes:
[#100] Add fetch. New Action

My suggestion for the title ("New version of ...") and "Deliverable" line for each repo is:

  • hyperon-das: library in PyPI named hyperon-das
  • hyperon-das-atomdb: library in PyPI named hyperon-das-atomdb
  • FaaS gateway: tagged branch (master) in the repository
  • DAS MeTTa Parser: tagged branch (master) in the repository
  • das-cli: Debian package named das-cli
  • DAS: tagged branch (master) in the repository
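As an illustration of the suggested format, here is a minimal sketch of a helper that assembles such a message; the function name and fields are hypothetical, not part of any existing script.

def format_release_message(title: str, deliverable: str, version: str,
                           repository_url: str, changes: list[str]) -> str:
    """Build a Mattermost update report that starts with an explicit title line."""
    lines = [
        f"New version of {title}",
        "",
        f"Deliverable: {deliverable}",
        f"New version available: {version}",
        f"Repository: {repository_url}",
        "Changes:",
    ]
    lines.extend(changes)
    return "\n".join(lines)

print(format_release_message(
    "hyperon-das",
    "library in PyPI named hyperon-das",
    "0.7.5",
    "https://github.com/singnet/das-query-engine/releases/tag/0.7.5",
    ["[#201] Implement fetch() in the DAS API",
     "[#223] Updates in docstrings and logging messages"],
))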

Implement a basic version of DASNode

EPIC

This is required for singnet/das-query-engine#269

Introduction

The ultimate goal is to pack DAS into a node which allows users to use it as the basic element in a distributed network to solve hard problems. The idea is to use a connected network of DASNodes to split the effort of solving a hard problem. The goal of this task is to implement a basic, simplified version of such a node, with only part of the features required to achieve the ultimate goal.

First, a little recap of the necessary concepts regarding distributed systems.

The fundamental building blocks that distributed systems employ to coordinate and communicate among network nodes are known as distributed primitives. These fundamental activities, or protocols, offer dependable and effective methods for exchanging information, coordinating actions, and managing errors in a distributed setting. A more complete list of such building blocks can be found here: https://en.wikipedia.org/wiki/Distributed_algorithm

Here, we list only the most relevant ones we need to implement DASNode.

  • Leader election
  • Reliable message exchanging
  • Atomic commit operations
  • Mutual exclusion in the use of shared resources
  • Consensus
  • Replication

Leader Election

A leader election protocol chooses the node of a distributed system that controls coordination and decision-making. It's frequently used in fault-tolerant systems to make sure that only one node is in charge of managing operations and making decisions. In a distributed system with numerous nodes, a leader election protocol guarantees that one node is designated as the primary node in charge of coordinating operations. If the primary node fails, another node can be elected as the new leader and take over the coordination and decision-making duties.
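As a toy illustration of the idea (not the DASNode design), the sketch below runs a bully-style election in a single process: the highest-id node that is still alive becomes the leader, so a new election after a failure always converges on one coordinator.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    alive: bool = True
    peers: list["Node"] = field(default_factory=list)
    leader_id: int | None = None

def elect_leader(starter: Node) -> None:
    """Bully-style election: the highest-id reachable node that is alive wins."""
    candidates = [n for n in [starter, *starter.peers] if n.alive]
    leader = max(candidates, key=lambda n: n.node_id)
    for node in candidates:
        node.leader_id = leader.node_id

# Three nodes; the current leader (id 3) fails and node 1 starts a new election.
a, b, c = Node(1), Node(2), Node(3)
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
c.alive = False
elect_leader(a)
assert a.leader_id == b.leader_id == 2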

Reliable message exchanging

The nodes in the distributed system can communicate with each other in a reliable way regardless of the geographical distribution of the NODEs (no assumption about connectivity can be made other than the capability to use the predefined messaging protocol). Messages are guaranteed to be delivered. Messages from node A to node B are guaranteed to be delivered in the same order they were issued. Messages can be exchanged between nodes deployed on different operating systems and running in different programming languages.
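A minimal sketch of the ordering guarantee is shown below: each sender numbers its messages and the receiver buffers out-of-order arrivals until the gap is filled. Transport, retries and acknowledgements are left out, and the class name is illustrative.

class OrderedReceiver:
    """Delivers messages from a single sender in the order they were issued,
    even if they arrive out of order."""

    def __init__(self) -> None:
        self.next_seq = 0
        self.buffer: dict[int, str] = {}

    def on_message(self, seq: int, payload: str) -> list[str]:
        """Buffer the message and return everything that is now deliverable in order."""
        self.buffer[seq] = payload
        deliverable = []
        while self.next_seq in self.buffer:
            deliverable.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return deliverable

rx = OrderedReceiver()
assert rx.on_message(1, "b") == []          # arrived early, held back
assert rx.on_message(0, "a") == ["a", "b"]  # gap filled, both delivered in order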

Atomic commit operations

An atomic commit is an operation where a set of distinct changes is applied as a single operation. If the atomic commit succeeds, it means that all the changes have been applied. If there is a failure before the atomic commit can be completed, the "commit" is aborted and no changes will be applied.
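The all-or-nothing behaviour can be illustrated with a minimal single-process sketch (a real distributed atomic commit would need a protocol such as two-phase commit between nodes): every change is validated before anything is applied, and the store is only touched when the whole set is accepted.

def atomic_commit(store: dict, changes: dict, can_apply) -> bool:
    """Apply all changes or none: abort before touching the store if any change is rejected."""
    if not all(can_apply(key, value) for key, value in changes.items()):
        return False  # abort: nothing was applied
    store.update(changes)  # commit: every change is applied together
    return True

store = {"x": 1}
assert atomic_commit(store, {"x": 2, "y": 3}, can_apply=lambda k, v: v >= 0)
assert store == {"x": 2, "y": 3}
assert not atomic_commit(store, {"x": -1, "z": 5}, can_apply=lambda k, v: v >= 0)
assert store == {"x": 2, "y": 3}  # nothing from the aborted commit leaked through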

Mutual exclusion in the use of shared resources

Mutual exclusion strategies are not just a means to prevent multiple agents from using a shared resource at the same time. Of course this is important, but another very important aspect of such strategies is to make sure every agent has a fair chance to access the shared resources. In distributed systems lingo, it means making sure that no agent will starve while waiting for a resource.
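The fairness aspect can be illustrated with a ticket-style (FIFO) lock sketch: requests are granted strictly in arrival order, so no agent starves. This is a single-process illustration, not a distributed mutual exclusion protocol.

from collections import deque

class FairLock:
    """Grants the resource strictly in arrival (FIFO) order, so no requester starves."""

    def __init__(self) -> None:
        self.waiting: deque[str] = deque()

    def request(self, agent: str) -> None:
        self.waiting.append(agent)

    def grant_next(self) -> str | None:
        """Hand the resource to the agent that has been waiting the longest."""
        return self.waiting.popleft() if self.waiting else None

lock = FairLock()
for agent in ["a", "b", "c"]:
    lock.request(agent)
assert [lock.grant_next() for _ in range(3)] == ["a", "b", "c"]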

Consensus

Consensus is a procedure that allows a group of nodes in a distributed system to agree on a common decision or understanding even when there are failures. Consensus methods guarantee that the agreed-upon value is trustworthy and consistent and that all nodes concur on it. More precisely, a Consensus protocol must satisfy the four formal properties below.

  • Termination: every correct process decides some value.
  • Validity: if all processes propose the same value V, then every correct process decides V.
  • Integrity: every correct process decides at most one value, and if it decides some value V, then V must have been proposed by some process.
  • Agreement: if a correct process decides V, then every correct process decides V.

Replication

To provide fault tolerance and scalability in a distributed system, replication is a strategy used to copy data or services across several nodes. Through replication, another node can take over processing duties in the event of a node failure without affecting the system's overall performance.

Basic implementation of DASNode

In this task we'll aim at implementing only a first simple use case. Basically we'll aim at "Leader election" and "Reliable message exchanging" as described above.

Use Case 1

  • USER: a person using the software we provide to execute JOBs in a network of NODEs.
  • NODE: a server process running on a Docker container encompassing all components required to run a DASNode.
  • JOB: a data structure designed to contain a script in one of the script languages supported by DASNode. This script contains all the code required to execute a given task.
  • MESSAGE: a data structure to encapsulate pieces of information we want to transport from one NODE to another.
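A minimal sketch of how the JOB and MESSAGE structures above could be represented is shown below; the field names are assumptions made for illustration, not the actual DASNode design.

from dataclasses import dataclass

@dataclass
class Job:
    """A script in one of the supported languages plus what is needed to run it."""
    language: str       # e.g. "python"
    script: str         # the code that performs the task
    requester_id: str   # the NODE originally contacted by the USER

@dataclass
class Message:
    """A piece of information transported from one NODE to another."""
    sender_id: str
    receiver_id: str
    kind: str           # e.g. "job_request", "partial_result"
    payload: bytes

job = Job(language="python", script="print('hello')", requester_id="node-1")
msg = Message(sender_id="node-1", receiver_id="node-2", kind="job_request",
              payload=job.script.encode())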

Actions to be taken prior to the actual start of Use Case 1

  1. USER manually starts each NODE in the network.
  2. Each NODE is aware of all other NODEs.
  3. All NODEs remain in a latent state waiting for JOB requests.

Actual Use Case 1

  1. USER submits a JOB to any of the NODEs in the network.
  2. The NODE starts processing the JOB. Eventually, other NODEs are contacted via MESSAGE exchanges to share the burden of processing the JOB.
  3. Once the JOB finishes, results are collected by the same NODE originally contacted by the USER and delivered to the USER.

Non-functional requirements

  1. Although Use Case 1 assumes a fully-connected topology, other topologies must be considered when designing message exchanging. Algorithms and data structures in DASNode must abstract the actual topology and work properly in any case. Priority is to support all the basic topologies, e.g. ring, mesh, torus, etc. (see a more complete list here: https://en.wikipedia.org/wiki/Leader_election), but the goal is to support any arbitrary topology configured by the USER.
  2. JOB should be defined as a script in some programming language. DASNode should be able to support multiple programming languages here, so the design must be flexible. Initially we'll support only Python scripts doing queries to a remote DAS Server.
  3. The messaging subsystem doesn't need to support cryptography at this stage, but secure message exchanging is a must-have in future versions, so the messaging subsystem architecture/design must at least take this into account, making sure cryptography can be added without major rework.
  4. NODEs must run inside Docker containers.
  5. There's a TBD baseline JOB test case which will basically perform a single query to the remote DAS Server, process all the results, and perform some extra computation on each result in order to evaluate each result's quality. The top N would then be delivered to the USER as the result of the JOB processing. This baseline test case should run with a speedup of at least 70% * N when executed on a network of N (1 < N < 6) equally resourced NODEs compared to execution on a single NODE (e.g., with 4 NODEs the JOB should run at least 2.8x faster than on one NODE).
  6. We should have metainfo in NODEs and JOBs describing the capabilities of available hardware resources (NODEs) and a rough estimate of required resources per JOB or parts of the JOB.

Review RAM-Only AtomDB

EPIC

Goals:

  • Review algorithms and data structures looking for optimization opportunities. The idea is to prioritize good design, code readability etc but without wasting time/memory unnecessarily.
  • Measure performance (time/memory) against standard MeTTa interpreter spaces

Eventually, other classes called downstream may be refactored as well. Two known issues are:

but there may be others.

The test cases to benchmark can be extracted from the MeTTa examples provided in the MeTTa interpreter repo: https://github.com/trueagi-io/hyperon-experimental, but it'd be interesting to collect more test MeTTa code from users. There's a Mattermost channel for the MeTTa user group ("Metta Coders" on the public World server).

docker-compose build fail

.../das$ git branch -vv
* main 96065d7 [origin/main] Merge pull request #141 from singnet/senna-fixes-5

It appears to fail when calling gcc to build the couchbase wheel:

.../das$ docker-compose up -d
...
...
384.2       Launching build with env: {'PYTHON_PIP_VERSION': '23.0.1', 'HOME': '/root', 'GPG_KEY': 'E3FF2839C048B25C084DEBE9B26995E310250568', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/dbf0c85f76fb6e1ab42aa672ffca6f0a675d9ee4/public/get-pip.py', 'PATH': '/tmp/pip-build-env-rzuk8at4/overlay/bin:/tmp/pip-build-env-rzuk8at4/normal/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'LANG': 'C.UTF-8', 'PYTHON_VERSION': '3.9.19', 'PYTHON_SETUPTOOLS_VERSION': '58.1.0', 'PWD': '/app', 'PYTHON_GET_PIP_SHA256': 'dfe9fd5c28dc98b5ac17979a953ea550cec37ae1b47a5116007395bfacff2ab9', 'PLAT': 'linux-x86_64', 'PIP_BUILD_TRACKER': '/tmp/pip-build-tracker-c20putk1', 'PYTHONNOUSERSITE': '1', 'PYTHONPATH': '/tmp/pip-build-env-rzuk8at4/site', 'PEP517_BUILD_BACKEND': 'setuptools.build_meta:__legacy__', 'CXXFLAGS': ' -Wno-strict-prototypes -fPIC -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DVERSION_INFO=\\"3.2.3\\"', 'CFLAGS': ' -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04'}, build_args: ['--config', 'Release', '--', '-j2'], cmake_args: ['-DPYCBC_LCB_API=0x02FF04', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/pip-install-tqyo9fa8/couchbase_d73e3742d88e47b69f403c9d297010fe/build/lib.linux-x86_64-cpython-39/couchbase_core', '-DPYTHON_EXECUTABLE=/usr/local/bin/python', '-DPYTHON_INCLUDE_DIR=/usr/local/include/python3.9', '-DHYBRID_BUILD=TRUE', '-DPYTHON_LIBFILE=/usr/local/lib/libpython3.9.so', '-DPYTHON_LIBDIR=/usr/local/lib', '-DPYTHON_VERSION_EXACT=3.9', '-DCMAKE_BUILD_TYPE=RELEASE', '-DCMAKE_VERBOSE_MAKEFILE:BOOL=ON', '-DPYTHON_EXECUTABLE=/usr/local/bin/python']
384.2       Got platform linux
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       copying build/temp.linux-x86_64-cpython-39/install/lib/Release/libcouchbase.so.8 to /tmp/pip-install-tqyo9fa8/couchbase_d73e3742d88e47b69f403c9d297010fe/build/lib.linux-x86_64-cpython-39/couchbase_core/libcouchbase.so.8
384.2       success
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       copying build/temp.linux-x86_64-cpython-39/install/lib/Release/libcouchbase.so.8 to /tmp/pip-install-tqyo9fa8/couchbase_d73e3742d88e47b69f403c9d297010fe/couchbase_core/libcouchbase.so.8
384.2       success
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       self.base is ['build', 'temp.linux-x86_64-cpython-39']
384.2       error: command '/usr/bin/gcc' failed with exit code 1
384.2       [end of output]
384.2   
384.2   note: This error originates from a subprocess, and is likely not a problem with pip.
384.2   ERROR: Failed building wheel for couchbase
384.2   Building wheel for durationpy (setup.py): started
384.4   Building wheel for durationpy (setup.py): finished with status 'done'
384.4   Created wheel for durationpy: filename=durationpy-0.5-py3-none-any.whl size=2512 sha256=de6df5ff2a12c7ccbf47de730fde6c0a0b8a150e0d04bbc3da3686205cdcdc36
384.4   Stored in directory: /tmp/pip-ephem-wheel-cache-8y__iosy/wheels/bc/16/3f/24b88b08411d6866c581f9c024d049dbb89a8def6287efe448
384.5 Successfully built durationpy
384.5 Failed to build couchbase
384.5 ERROR: Could not build wheels for couchbase, which is required to install pyproject.toml-based projects
393.1 
393.1 [notice] A new release of pip is available: 23.0.1 -> 24.1.1
393.1 [notice] To update, run: pip install --upgrade pip
------
Dockerfile:13
--------------------
  11 |     WORKDIR /app
  12 |     
  13 | >>> RUN pip install --no-cache-dir -r requirements.txt
  14 |     
  15 |     CMD [ "bash" ]
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1
ERROR: Service 'app' failed to build : Build failed

Update deployment script to avoid redundant confirmation questions

I think when the script comes to this point:

-----------------------------------------------------------------------------------
hyperon-das-atomdb from version 0.6.10 to version 0.6.11
-----------------------------------------------------------------------------------
hyperon-das from version 0.7.10 to version 0.7.11
-----------------------------------------------------------------------------------
das-serverless-functions from version 1.12.9 to version 1.12.10
-----------------------------------------------------------------------------------
Do you want to continue and apply these changes? [yes/no] y

The user has already confirmed that they want to proceed with whatever is required to update each of these repos. So asking again for confirmation like this for each repository:

Do you want to continue with the commit? [y/n] y

is redundant. It's also annoying, because the release process as a whole takes some time, so the user would probably want to start it and get back to it later; making all the confirmations at the beginning is a better pattern than spreading them along the process. Following this line, this confirmation should also be asked earlier (and only once, not once per repository):

Do you want to run the integration tests? [y/n] y

Perhaps a good place for it would be just after "Do you want to continue and apply these changes? [yes/no]".
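A minimal sketch of the suggested pattern, collecting every confirmation before any long-running work starts (prompt texts, repository names and steps are illustrative placeholders):

def confirm(question: str) -> bool:
    return input(f"{question} [y/n] ").strip().lower() in ("y", "yes")

def release() -> None:
    # Ask everything up front so the user can walk away once the process starts.
    if not confirm("Do you want to continue and apply these changes?"):
        return
    run_integration_tests = confirm("Do you want to run the integration tests?")

    for repo in ("hyperon-das-atomdb", "hyperon-das", "das-serverless-functions"):
        if run_integration_tests:
            ...  # run this repo's integration tests without asking again
        ...      # bump the version and commit without asking again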

DAS API page is broken

When hitting https://singnet.github.io/das-query-engine/api/das/ I see gigantic icons followed by a 404 error.

So we have 2 problems which need to be fixed in this card:

  • (1) The 404 page should NOT show those gigantic icons.
  • (2) The API documentation is either not being built or is being deployed somewhere else (if that's the case, the link in our main documentation hub (the README in the das repo) should be fixed).

Issue loading a local DAS

Hi!
I performed the following steps to load a small local DAS:

  1. Started the DBs with:

$ das-cli db start

  2. Loaded the MeTTa file:

$ das-cli metta load /home/saulo/snet/hyperon/github/das-pk/shared_hsa_dmel2metta/data/output/test.metta

  3. Tried to run the script:

from hyperon_das import DistributedAtomSpace

host = '127.0.0.1'
port = '8888'
das = DistributedAtomSpace(query_engine='local', host=host, port=port)

print(f"Connected to DAS at {host}:{port}")
print("(nodes, links) =", das.count_atoms())

  4. Ran it this way:

$ python3.10 bio2.py
Connected to DAS at 127.0.0.1:8888
(nodes, links) = (0, 0)

Why are there no atoms?
The MeTTa code in test.metta is below.
Thanks!

(allele FBal0100372)
(allele_symbol (allele FBal0100372) Myc[P0])
(gene_id (allele FBal0100372) FBgn0262656)
(taxon_id (allele FBal0100372) 7227)
(allele FBal0304771)
(allele_symbol (allele FBal0304771) Clk[SV40.Tag:V5])
(gene_id (allele FBal0304771) FBgn0023076)
(taxon_id (allele FBal0304771) 7227)

Review AtomDB API to optimize the use of all MongoDB indexes

Currently we have only one, badly named, method: get_matched_node_name. We should design a better set of methods with proper functionalities and names.

Eventually, we may need to create extra indexes in MongoDB in order to make the proposed queries faster in redis_mongo.
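As a purely hypothetical illustration of the kind of split this issue asks for (none of these method names exist today), the single catch-all call could become a small family of explicit queries, each one able to map onto a dedicated MongoDB index:

from abc import ABC, abstractmethod
from typing import List, Optional

class NodeQueries(ABC):
    """Hypothetical interface replacing the single get_matched_node_name method."""

    @abstractmethod
    def get_node_by_name(self, node_type: str, name: str) -> Optional[str]:
        """Exact-name lookup, backed by a (type, name) index."""

    @abstractmethod
    def get_nodes_by_name_prefix(self, node_type: str, prefix: str) -> List[str]:
        """Prefix search, backed by a sorted index on the name field."""

    @abstractmethod
    def get_nodes_by_field(self, node_type: str, field: str, value: str) -> List[str]:
        """Lookup on a custom indexed field."""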

Docs: where are the mentioned components/entities located in the architecture?

Hello!

I created this issue to ask some questions that I had while reading the docs, naturally after understanding the system's goal itself.

The functionalities of DAS are clear to me. However, there are still some gaps that I want to clarify.

Regarding the architecture

Is DAS a standalone component? Or is it the system as a whole? I'm wondering because I'm not sure where the Query Engine, AtomDB, and Cache stand.

Based on https://github.com/singnet/das/blob/master/docs/das-users-guide.md#starting-a-das-server, am I correct to conclude that there are basically three components which together form the DAS system?

  1. Redis cluster (is it for cache?)
  2. MongoDB cluster (DBMS for persistent store)
  3. OpenFaaS gateway

If that is correct, are things like the Query Engine, Cache, and AtomDB all operating under the OpenFaaS gateway?
Meaning that OpenFaaS, when receiving a query, will deploy a certain function which contains all of these functionalities.

Also... Custom indexes: where are they stored? Within the DBMS or AtomDB?

Cache

Is the cache stored in-memory (RAM) or on a persistent storage (e.g.: Redis)? Or both?

In the following diagram https://github.com/singnet/das/blob/master/docs/das-overview.md#das-server-deployment-and-architecture , it seems the DAS cache is being represented as a function. Does that mean the cache operates on in-memory data as a function, or is the diagram's cache an interface to where the cache data is stored?

Architectural questions summary

I basically want to understand where the following entities/components are located within the architecture:

  • Cache
  • Query-Engine (and any other responsible for handling the queries)
  • AtomDB

Extra (just out of curiosity): regarding implementation

  1. Are atoms/links garbage collected, like human memory? Or is it up to the caller to remove atoms/links?

  2. As a client connected to multiple DAS servers, will queries be done in parallel?

  3. Can atoms on different remote DAS servers have links with each other? If yes, how is that synchronized?

Connecting to server error

I've loaded a MeTTa file into a local DAS, but my Python script is not able to connect to the DAS server.
Steps I did:

  1. das-cli db start
  2. das-cli faas start
  3. das-cli metta load my_file.metta

The script is only:

from hyperon_das import DistributedAtomSpace

host = '127.0.0.1'
port = '8080'
das = DistributedAtomSpace(query_engine='remote', host=host, port=port)

And the error message is:

Traceback (most recent call last):
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/decorators.py", line 21, in wrapper
    status, response = function(*args, **kwargs)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/utils.py", line 172, in connect_to_server
    raise Exception(message)
Exception: Error unpickling objects in peer's response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/hdd_2/saulo/snet/rejuve.bio/das/shared_rep/data/bio2.py", line 9, in <module>
    das = DistributedAtomSpace(query_engine='remote', host=host, port=port)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/das.py", line 75, in __init__
    self.set_query_engine(**kwargs)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/das.py", line 109, in set_query_engine
    self.query_engine = RemoteQueryEngine(self.backend, self.system_parameters, **kwargs)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/query_engines/remote_query_engine.py", line 29, in __init__
    self.remote_das = FunctionsClient(self.host, self.port)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/client.py", line 24, in __init__
    self.url = connect_to_server(host, port)
  File "/home/saulo/.local/lib/python3.10/site-packages/hyperon_das/decorators.py", line 29, in wrapper
    raise RetryConnectionError(
hyperon_das.exceptions.RetryConnectionError: ('An error occurs while connecting to the server', "Error unpickling objects in peer's response")
