vaticle / typedb-driver

TypeDB Drivers for Rust, Python, Java, Node.js, C, C++, and C#.

Home Page: https://typedb.com

License: Apache License 2.0

Java 14.47% Gherkin 0.05% Starlark 12.52% Shell 0.34% Batchfile 0.51% Rust 19.95% SWIG 0.28% TypeScript 11.97% JavaScript 0.96% Python 10.71% Kotlin 2.27% C 0.47% C++ 11.82% CMake 0.06% C# 13.61%
java typedb typeql typedb-client c cpp nodejs python rust typedb-driver

typedb-driver's Introduction

TypeDB Drivers

Factory Discord Discussion Forum Stack Overflow Hosted By: Cloudsmith

This repository stores all TypeDB Drivers built by Vaticle.

See the table below for links to README files, documentation, and source code.

Driver Readme Documentation Driver location
Rust README Documentation rust/
Python README Documentation python/
Node.js README Documentation nodejs/
Java README Documentation java/
C README See C++ c/
C++ README Documentation cpp/
C# README (Coming soon!) csharp/

Package hosting

Package repository hosting is graciously provided by Cloudsmith. Cloudsmith is the only fully hosted, cloud-native, universal package management solution that enables your organization to create, store and share packages in any format, to any place, with total confidence.

typedb-driver's People

Contributors

2xyo, adammichaelwood, alexjpwalker, dependabot[bot], dmikhalin, dmitrii-ubskii, farost, flyingsilverfin, grabl, haikalpribadi, izmalk, james-whiteside, jamesreprise, jmsfltchr, kasper-piskorski, krishnangovindraj, lolski, lriuui0x0, lukas-slezevicius, shiladitya-mukherjee, vaticle-bot, vmax


typedb-driver's Issues

Test cases for TypeDBOptions

Problem to Solve

Since moving to a reliance on BDD, TypeDBOptions are largely untested in 2.0.

Current Workaround

Manual testing / integration testing

Proposed Solution

Create BDD scenarios and implement them in all clients

Additional information

Some options are only relevant to queries over RPC (e.g. batch size), so it doesn't make sense to include test runners for these options in the Grakn server itself.

Also, some options were tested in client-python in 1.8, and these test scenarios can be salvaged from test_grakn.py in the client-python repo.

Tests may fail in CI due to server ports clashing

Problem to Solve

Our TypeQL and Concept tests may fail in CI due to server ports clashing. This has become substantially more common since adding Cluster tests. The problem is that we generate a random port number between 40000 and 60000 for each test. With a large test suite, the chance of a clash is significant: by the birthday approximation, 100 tests drawing from 20,000 ports already have roughly a 22% chance of at least one collision, and Cluster tests consume two ports each.

Environment

  1. OS (where Grakn server runs): Ubuntu 20.04
  2. Grakn version (and platform): Grakn 2.0
  3. Other environment details: Grabl

Current Workaround

Rerun the failing job.

Proposed Solution

There are a couple of approaches we could take here, and none of them are going to be fun in Bash - we should try to implement one (or more) of the following in Kotlin:

1. Read the port number from an environment variable named TYPEDB_PORT

If it doesn't exist, initialise it to 40000. If it exists, increment it by 2 and use the result (2 because Cluster uses two ports).

This method gives me concerns because reading/writing environment variables is not atomic. We could easily run into issues.

2. Read the port number from a file named typedb_port.txt

Using a similar strategy to (1). But I think this is also non-atomic.

3. If port in use, try different port

We could simply try a different port if starting TypeDB fails because the port is already in use (a sketch of this approach follows this list). However, I am concerned that the tests may just go ahead and run anyway: they use lsof to detect when TypeDB has started, and in the case of a port conflict, TypeDB will appear to have started, causing issues.

4. Manually specify a port for each test in BUILD file

This is undoubtedly the ugliest solution of all, and prone to human error, but it does at least eliminate random failures.
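A minimal sketch of approach (3), written in Java for consistency with the other snippets in this document even though the issue proposes Kotlin. It probes candidate ports until one binds; note the unavoidable race between probing a port and the server actually binding it:

    import java.io.IOException;
    import java.net.ServerSocket;

    public class PortAllocator {
        // Probe ports in [start, end) and return the first one that is free.
        // Stepping by 2 leaves room for Cluster, which uses two ports per node.
        // Caveat: another process may still grab the port between this check
        // and the server binding it, so callers should retry on bind failure.
        public static int findFreePort(int start, int end) throws IOException {
            for (int port = start; port < end; port += 2) {
                try (ServerSocket probe = new ServerSocket(port)) {
                    return port;
                } catch (IOException inUse) {
                    // port taken; try the next candidate
                }
            }
            throw new IOException("no free port in [" + start + ", " + end + ")");
        }
    }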

Delete BatchExecutorClient and replace any usage with GraknClient

  1. We need to make ai.grakn.test.graql.reasoner.BenchmarkIT use ai.grakn.client.Grakn instead of BatchExecutorClient and embedded transactions.

  2. We just need to delete the -b/--batch feature from grakn console, since we already have -f/--file for importing files which uses the new ai.grakn.client.Grakn.

  3. Delete ai.grakn.batch.*

Implement TypeDBRunner in all clients, extracting the common logic into Kotlin/Bash scripts

Problem to Solve

Currently each of our 3 clients has a wildly different BDD infrastructure setup:

Client Java

  • uses a Java class as the entry point to each test, linked to a Bazel target that passes the location of the TypeDB Core distro as an argument to the test runner, which TypeDBCoreRunner has access to;
  • uses Cucumber's SetUp & TearDown hooks to unzip, run & stop TypeDB via TypeDBCoreRunnerAPI

Client NodeJS

  • uses shell script to unzip, run TypeDB, compile client, compile test steps, run tests, and stop TypeDB

Client Python

  • uses Bazel rule containing shell commands (copied from client-nodejs) to unzip + run TypeDB, run tests and stop TypeDB

In all clients, the client code and steps are passed in as Bazel dependencies; in client-nodejs, they are both generated on-the-fly using genrules.

Proposed Solution

See Ganesh's message below.

Incorrect casting exception

Description

Trying to retrieve subtypes of meta types throws an invalid-casting exception.

Environment

Client-java Master (1.7.2) tests

Reproducible Steps

Create this test somewhere in client-java

    @Test
    public void subtypes() {
        GraknClient.Session session = client.session("subs");
        GraknClient.Transaction tx = session.transaction().write();
        tx.execute(Graql.parse(" define name sub attribute, value string;").asDefine());
        tx.commit();
        tx = session.transaction().write();
        List<? extends SchemaConcept.Remote<?>> subs = tx.getSchemaConcept(Label.of("thing")).subs().collect(Collectors.toList());
        System.out.println(subs);
    }

Expected Output

Should return the subtypes of thing: thing and name

Actual Output

The concept [grakn.client.concept.type.impl.AttributeTypeImpl.Remote{tx=grakn.client.GraknClient$Transaction@3d1848cc, id=V4272}] is not of type [interface grakn.client.concept.type.MetaType$Remote]
grakn.client.concept.GraknConceptException: 
	at grakn.client.concept.GraknConceptException.create(GraknConceptException.java:35)
	at grakn.client.concept.GraknConceptException.invalidCasting(GraknConceptException.java:42)
	at grakn.client.concept.Concept$Remote.asMetaType(Concept.java:559)
	at grakn.client.concept.type.impl.MetaTypeImpl$Remote.asCurrentBaseType(MetaTypeImpl.java:64)
	at grakn.client.concept.type.impl.MetaTypeImpl$Remote.asCurrentBaseType(MetaTypeImpl.java:45)
	at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
	at java.util.Iterator.forEachRemaining(Iterator.java:116)
	at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at grakn.client.test.integration.answer.AnswerIT.subtypes(AnswerIT.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at com.google.testing.junit.runner.internal.junit4.CancellableRequestFactory$CancellableRunner.run(CancellableRequestFactory.java:89)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
	at com.google.testing.junit.runner.junit4.JUnit4Runner.run(JUnit4Runner.java:112)
	at com.google.testing.junit.runner.BazelTestRunner.runTestsInSuite(BazelTestRunner.java:159)

Client should not connect to an old primary replica

Description

Under a network partitioning scenario, it is possible for a cluster to have two primary replicas. Primary replicas are assigned a term number, so it is possible to tell which of the two replicas is the newer one.

However, Client Java currently ignores the term number, and may therefore connect to the newer one first, only to later fail over to the older one.

It is important to note that this scenario won't compromise data correctness, since the Raft protocol guards against it. However, it would be more efficient if the term information were used so that failover always happens from the old primary replica to the new one.

Environment

  1. OS (where Grakn server runs): Any
  2. Grakn version (and platform): 2.0.0-alpha
  3. Other environment details: N/A

Reproducible Steps

Steps to create the smallest reproducible scenario:

  1. Create a cluster of three nodes and wait for a primary replica to be selected by the cluster members
  2. Disconnect the network from the primary replica to the other nodes, and wait for the rest of the cluster members to elect a new replica
  3. Connect to the new primary using Client Java by specifying the IP of the new primary replica or of a secondary replica; either will be successful
  4. Then, connect to the old primary using Client Java by specifying the IP directly, which will be successful

Expected Output

Step #4 should fail. Given that Client Java has already seen the new primary, it should reject the attempt to connect to the old one.

Actual Output

Step #4 succeeded.

Allow user to check if session is still alive on the server

Problem to Solve

The test scenario:

Scenario: delete a database causes open sessions to fail
    When connection create database: grakn
    When connection open session for database: grakn
    When connection delete database: grakn
    Then connection does not have database: grakn
    Then session open transaction of type; throws exception: write

must be ignored for now, because it fails in @After, where we attempt to close all sessions and an exception is thrown because the session opened in step 2 no longer exists on the server.

We could fix it by checking whether each session is alive remotely before closing it. While not necessarily efficient, this solution is guaranteed not to throw.

We already have a mechanism for checking if sessions are alive remotely: the pulse mechanism. We can leverage this to provide a simple public method that returns a boolean corresponding to whether the session is alive remotely or not.
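A hypothetical sketch of such a method, layered on the existing pulse mechanism (the method and helper names here are assumptions, not the actual client API):

    // Returns true iff the server still knows about this session.
    public boolean isOpenRemotely() {
        try {
            // A pulse doubles as a liveness probe: the server only
            // acknowledges pulses for sessions it still holds.
            return sendPulse().isAlive();
        } catch (StatusRuntimeException e) {
            return false; // the RPC failed, so treat the session as dead
        }
    }

The @After hook could then close only the sessions for which isOpenRemotely() returns true.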

Method to retrieve the attributes of a type is misnamed

Description

On the server, Type has a method .has() to retrieve the AttributeTypes it owns. For Thing instances, the equivalent method is named .attributes() and retrieves the attribute instances they own. However, in client-java (and presumably other clients) both methods are named .attributes().

Improve concurrency fixes in RPCTransceiver

Description

The RPCTransceiver acts as an abstraction around our async request-response style of communication over GRPC streams.

We previously found an issue that caused the ResponseListener to hang forever due to a race condition, but we did not correctly identify the cause and instead added synchronized to the methods on the listener, which fixed the symptom but did not address the underlying cause. The synchronized methods should be unnecessary, because the documentation states that implementations of StreamObserver do not need to support concurrent access - they only need to be thread-compatible; it is up to the caller to synchronise access if calling from multiple threads.

We have now identified that the real cause of the issue was an earlier attempt to fix another bug, where the transceiver would appear to remain open after calling the close method (making the transaction appear open). This was "incorrectly" fixed by calling onCompleted on the ResponseListener to simulate the connection being correctly closed immediately; this call happens on the client thread, not the GRPC thread, breaking the thread safety restrictions of ResponseListener. The actual hang was caused by the client thread believing that there were response handlers waiting for responses at the time of close, but then having them be missing when performing an actual fetch of the response queue.

The more correct fix is for transaction closing to be independent of transceiver closing, so that the transceiver can remain open for more responses after the transaction close request has been sent. It is conceivable that some valid results should still be processed before the transaction has completed being closed, so the transceiver should not eagerly close.

The high level goal of this refactor should be to remove the synchronized keywords from ResponseListener without causing further concurrency issues.

Change the API to disallow reading from secondary replica when using TypeDB Core

The notion of performing reads from secondary replicas does not make sense in TypeDB Core, since there is only one server.
However, the user can mistakenly do so since the API allows it.

Currently there is a runtime check that throws an exception if the user does this by mistake, but ideally it should be enforced at the type-system level.

One way is to make Transaction.Type strict by splitting it into Transaction.Type.Core and Transaction.Type.Cluster. We'll have to convert the enum to a class since you can't do inheritance with enums.

class Type {
    static class Core {
        public static final Core READ = new Core();
        public static final Core WRITE = new Core();
    }
    static class Cluster extends Core {
        public static final Cluster READ_SECONDARY = new Cluster();
    }
}
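Intended usage under the split, as an illustration rather than the actual API. One caveat worth noting: if Cluster extends Core as above, READ_SECONDARY is still a Core at the type level, so the session method signatures would need care to actually reject it at compile time.

    // Illustrative call sites (method signatures are assumptions):
    coreSession.transaction(Type.Core.READ);                 // the only read option for Core
    clusterSession.transaction(Type.Cluster.READ_SECONDARY); // Cluster-only constant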

2.0: Add overridden roles to concept API.

Grakn 2.0 adds role overriding to the concept API, which is not currently supported in client-java. This causes some verification steps to be unsatisfiable. This may also require protocol changes: see how the changes were made for attribute overriding for more details.

BDD tests for transaction commit/close

Problem to Solve

Currently, the BDD steps have no tests for commit behavior, closing multiple times, committing multiple times, etc. These should be implemented to ensure that transactions behave as expected.

E.g. after we close a transaction, we cannot commit. After a commit(), we can close as many times as we like (close is idempotent), but a second commit() must produce an error.
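A minimal JUnit sketch of the expected semantics, following the GraknClient snippets elsewhere in this document (uses JUnit 4.13's assertThrows; the exact exception type thrown is an assumption):

    import static org.junit.Assert.assertThrows;

    @Test
    public void transactionLifecycle() {
        GraknClient.Transaction tx = session.transaction().write();
        tx.commit();
        tx.close(); // closing after a commit is idempotent: no error
        tx.close(); // closing again is still fine
        // a second commit (and any commit after close) must throw
        assertThrows(Exception.class, tx::commit);
    }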

client-java alpha-5: cannot get client!

Description

In the docs, the way to get a client is:

Grakn.Client client = new GraknClient("localhost:1729");

error:

error: constructor GraknClient in class GraknClient cannot be applied to given types;
        Grakn.Client client = new GraknClient(this.graknURI);
                              ^
  required: no arguments
  found: String
  reason: actual and formal argument lists differ in length

if you give it nothing:

Grakn.Client client = new GraknClient();

error is:

        Grakn.Client client = new GraknClient();

Environment

  1. Mac OS 10
  2. grakn: alpha-2, client-java: alpha-5

Reproducible Steps

see above

Add trace logging to RPC Transaction and Iterator

Problem to Solve

When bugs occur in the communication protocol they can be difficult to debug.

Current Workaround

We can debug and follow through everything that happens, although this tends to be quite difficult.

Proposed Solution

At the trace level, log the relevant RPC message whenever we issue a request or receive a response. Log a simple message whenever we await a response.
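A minimal sketch of what this could look like in the request path, using SLF4J (the field and method names here are assumptions):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    private static final Logger LOG = LoggerFactory.getLogger(RPCTransaction.class);

    private void dispatch(TransactionProto.Transaction.Req request) {
        // The guard avoids stringifying the protobuf message unless TRACE is enabled.
        if (LOG.isTraceEnabled()) LOG.trace("Sending request: {}", request);
        requestObserver.onNext(request);
    }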

Performance refactor of Answer and Concept API

Problem to Solve

Currently, the Concept API is the main point of data transfer to the Grakn user. A Graql query is executed (or streamed), which results in a number of Concepts being returned (usually a list of ConceptMaps). These concepts are essentially just Grakn IDs, which are then used by synchronous Concept API RPCs to retrieve data and other associated concepts.

This means that all data returned from a query is actually fetched as separate concept API RPC calls, which results in a large amount of bidirectional communication and waiting. Any amount of latency introduced by the network is multiplied by the number of these RPCs, which can often be a multiple of the number of concepts retrieved by a single query.

As an example, if network latency is 10ms and 100 concepts are fetched, with 4 RPC calls per concept, 4 seconds of total latency is introduced.

It is clear that this can pose a performance issue to users who intend to build low-latency applications on Grakn.

Current Workaround

There is currently no good workaround, other than to minimise the number of Concept API calls made (it is impossible to do asynchronous calls on a transaction currently).

A short-term fix was proposed which would simply pre-fetch common Concept API answers, but then the boundaries between RPC-driven communication and results parsing would be obfuscated. This solution would alleviate the issues but we've decided on a wider-spread refactor to deal with the issue.

Proposed Solution

Instead of the full Concept API being the single point of contact for Answers, the Answers become "value-only" Concepts: they only expose methods to retrieve the pre-fetched results, and nothing further. This makes explicit to the user the separation between what exists in results (and therefore doesn't require any network overhead to use) and what must be fetched or performed via RPC.

The way I am planning to achieve this is to simplify the current Concept interfaces down to a small set of operations that covers the vast majority of query use cases. This will include attribute values and schema type information, which will also be embedded in the answers protocol. When a user wants to use the full Concept API instead, they must construct the "live" concept from the retrieved concept by calling a method and passing a transaction. For example:

concept.withTransaction(tx).sups()

This requires an API change that will break existing users of Grakn client, so we will also take the opportunity to rework the client API into an interface with a static factory method.
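A hypothetical sketch of the resulting shape, with all names illustrative: answers carry lightweight local concepts populated from the answer protocol, and anything beyond the pre-fetched data requires explicitly attaching a transaction:

    import java.util.stream.Stream;

    interface Concept {
        String label();                         // pre-fetched; no network round trip
        Remote withTransaction(Transaction tx); // upgrade to the full Concept API

        interface Remote {
            Stream<? extends Concept> sups();   // fetched over RPC on demand
        }
    }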

Summary of required work

  • Update protocol [x], server [ ] and client [ ] to allow retrieval of multiple concepts from a single RPC.
  • Update protocol to include basic concepts to be used in "Answer" API.
  • Update server to transmit all basic concepts for the "Answer" API.
  • Update client-java with separated Concept API from "Answer" API, allowing the creation of a "live" Concept from a result concept.
  • Refactor client-java API whilst we are here to avoid needing to do it again in future.

First Session always takes over 5 seconds to open on Mac OS

Description

The first call to client.session("keyspace") on an OSX system takes almost exactly 5 seconds to open each time, which is particularly noticeable when opening the grakn console.

Environment

  1. Mac OS Sierra+
  2. Grakn version (and platform): 1.6.2
  3. Other environment details:

Reproducible Steps

Steps to create the smallest reproducible scenario:

  1. Start a local grakn server on your MacBook Pro.
  2. Open the grakn console or any app that opens a grakn session using client-java to localhost.

Expected Output

If the keyspace already exists, the session should open within ~200ms.

Actual Output

The first session always takes slightly over 5000ms to open (it may take longer when opening a new keyspace).

The pom.xml generated by 'bazel build //:assemble-maven' contains missing info

Description

The pom.xml generated during Maven assembly or deployment contains placeholder values in place of real project information.

Environment

  1. OS (where Grakn server runs): N/A
  2. Grakn version (and platform): N/A
  3. Other environment details:

Reproducible Steps

$ bazel build //:assemble-maven
$ cat bazel-bin/pom.xml

Expected Output

<scm>
<connection>http://github.com/graknlabs/client-java</connection>
<developerConnection>http://github.com/graknlabs/client-java</developerConnection>
<tag>1.5.3</tag>
<url>http://github.com/graknlabs/client-java</url>
</scm>

Actual Output

<scm>
<connection>PROJECT_URL</connection>
<developerConnection>PROJECT_URL</developerConnection>
<tag>1.5.3</tag>
<url>PROJECT_URL</url>
</scm>

Two transactions failing over concurrently may not work

Problem to Solve

Given an open Cluster session, if we attempt to open two transactions concurrently and they both fail, the failover logic from one transaction is likely to interfere with the other one, causing the failover to also fail.

The problem is that each failure triggers the current Core session associated with this Cluster session to be closed and discarded. With the current logic, that could cause a scenario where:

  1. Transaction 1 fails to open, so we close Session A and open Transaction 1 in Session B
  2. Transaction 2 fails to open, so we close Session B and open Transaction 2 in Session C
  3. Now Transaction 2 is working, but if we try to do a query in Transaction 1, it will fail because we closed Session B.

Proposed Solution

If two (or more) transactions from the same session concurrently fail to open, we should ensure that only one new Core session is created, and all transactions should then fail-over to it.
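A sketch of one way to achieve this, assuming a single coreSession field on the Cluster session guarded by its monitor (all names here are hypothetical):

    // Serialise failover so that concurrent failures reuse one replacement session.
    private synchronized CoreSession failover(CoreSession failed) {
        // Another transaction may already have replaced the failed session;
        // if so, reuse its replacement instead of opening yet another one.
        if (coreSession != failed) return coreSession;
        coreSession.close();
        coreSession = openNewCoreSession();
        return coreSession;
    }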

Meta Types break

Description

Calling tx.getMetaEntityType() breaks.

Environment

Grakn Core 1.8 (master) and client-java 1.8 (master, 6074059).

Reproducible Steps


    @Test
    public void testMeta() {
        GraknClient.Session testing = client.session("testing");
        GraknClient.Transaction tx = testing.transaction().write();
        tx.execute(Graql.parse("define person sub entity;").asDefine());

        MetaType.Remote<?, ?> metaEntityType = tx.getMetaEntityType();
    }

Expected Output

Get the meta "entity" type.

Actual Output

Exception:

The concept [grakn.client.concept.type.impl.EntityTypeImpl.Remote{tx=grakn.client.GraknClient$Transaction@5ce8d869, id=V4152}] is not of type [interface grakn.client.concept.type.MetaType$Remote]
grakn.client.concept.GraknConceptException: 
	at grakn.client.concept.GraknConceptException.create(GraknConceptException.java:35)
	at grakn.client.concept.GraknConceptException.invalidCasting(GraknConceptException.java:42)
	at grakn.client.concept.Concept$Remote.asMetaType(Concept.java:559)
	at grakn.client.GraknClient$Transaction.getMetaEntityType(GraknClient.java:770)
	at grakn.client.test.integration.answer.AnswerIT.testMeta(AnswerIT.java:74)

Additional information

I thought we had tests that should hit this case?

If a client holding a schema session terminates abruptly, no client will subsequently be able to write any data

Description

If a client holding a schema session terminates abruptly, Grakn continues to hold a lock which makes it impossible for any client to subsequently write any data.

Environment

  1. OS (where Grakn server runs): Mac OS 11
  2. Grakn version (and platform): Grakn Core 2.0

Additional information

The following scenarios cause the client to stall waiting for a lock:

  • schema session is open; attempt to open 2nd schema session
  • schema session is open; attempt to open data write transaction
  • data write transaction is open; attempt to open schema session

Proposed Solution

We could mitigate this issue by automatically timing out sessions after some configurable period of time: we call this the idle timeout.

Conceptually, opening a session (any session - all sessions use server resources) will start a timer (say, 10 seconds) that will count down. If the timer reaches 0, the session is automatically closed. Note that the client will not be notified about this - they don't have a persistent connection that represents the session. They will simply get an error if they attempt to perform any operations after the session times out.

However, this should never occur with any of our own Grakn clients (console, client-java, client-python, client-nodejs...). To avoid sessions timing out while the user is sitting around thinking what operation they should do next - or while a query is running - we can introduce a pulse mechanism into all of our clients. The pulse "ticks" every 5 seconds and notifies the server that the session should be kept alive. When the server receives that notification, it should reset the session timer to its maximum value (eg 10 seconds).
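Client-side, the pulse could be as simple as a scheduled task (a sketch; sendPulse is an assumed wrapper around the pulse RPC):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    ScheduledExecutorService pulser = Executors.newSingleThreadScheduledExecutor();
    // Tick every 5 seconds; each pulse tells the server to reset the
    // session's idle timer back to its maximum (e.g. 10 seconds).
    pulser.scheduleAtFixedRate(session::sendPulse, 5, 5, TimeUnit.SECONDS);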

Therefore a session will only time out in the following cases:

  • The latency between client and server exceeds 5 seconds - in that case, the client should specify a higher idle timeout
  • The client terminates abruptly - in this case, ending the session is desirable
  • The user is using a 3rd-party Grakn client that does not send pulses - in this case, it is their client's responsibility.

Java Concept API method to re(set) direct supertype of a given type

Problem to Solve

Use the Concept API with Java to (re)set the direct supertype of a given type via a single method call.

Current Workaround

Chain the sup and label methods: type.sup(Type type).label(Label label);
Compared with the usage in the Node.js and Python clients (i.e. sup(type)), this is less convenient.

Proposed Solution

Besides sup(), have another method (sup(Type type)) that accepts the new supertype as a parameter.

Additional Information

This feature request is to ensure consistency across our clients.

Detect incompatibility and raise a relevant exception

Problem to Solve

When there is an incompatibility between Client Java and the running Grakn Server, the exception thrown by the client seems unrelated to the real cause of the problem. In fact, the error message may even be incorrect, given that the keyspace does exist with the given name.

Current Workaround

If encountering this exception for the first time, only searching the web or asking the community can lead the user to the solution.

Proposed Solution

Have Client Java identify the incompatibility and throw an exception with a helpful message, e.g. Client Java version 1.5.3 does not support the running Grakn Core server. Please refer to the documentation of Client Java to ensure compatibility.

Additional Information

N/A

2.0: Add Label scoping for 2.0

Labels in 2.0 can be scoped to relations (in order to be specific in plays syntax). This is currently not supported in client-java and causes some verification tests to be unsatisfiable.

Improve the UX of waiting for a lock to open a session or transaction

Problem to Solve

Currently, if a user (say, a database admin) keeps a schema session open for a few minutes to perform some maintenance, any other client attempting to connect to the database and write some data will just stall indefinitely.

Current Workaround

Complain to the Grakn team asking why the client is stalling

Proposed Solution

Complain to the DB admin asking why they're taking so long.

But seriously: we could improve the UX of the client by adding an acquire-lock timeout of, say, 10 seconds; if the lock is still not available after that, we throw an exception on the server that gets propagated back to the client.

That way the client will know that the server is busy doing something else. We could even be explicit and say that this is most likely caused by an open schema session (or data write transaction).
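Server-side, this maps naturally onto a timed lock acquisition (a sketch; SCHEMA_LOCK_TIMEOUT is a hypothetical error code and InterruptedException handling is elided):

    // Wait up to 10 seconds for the schema lock instead of blocking forever.
    if (!schemaLock.tryLock(10, TimeUnit.SECONDS)) {
        // The exception is propagated back to the client as a gRPC error,
        // ideally hinting that an open schema session or data write
        // transaction currently holds the lock.
        throw GraknException.of(SCHEMA_LOCK_TIMEOUT);
    }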

In the future we'll have an admin interface or maybe even a "force open" option that kills all conflicting sessions/transactions immediately.

TransactionTest intermittently stalls

Description

TransactionTest intermittently stalls.

Environment

  1. OS (where Grakn server runs): Mac OS 10
  2. Grakn version (and platform): Grakn Core 2.0

Additional information

It seems to occur about 1 in 5 runs when run as part of bazel test //test/behaviour/connection/.... I haven't managed to reproduce the issue when running it by itself, but that might just be coincidence.

Classpath contains multiple SLF4J Bindings - unwanted debug log

Description

Whenever I import GraknClient into a project that already uses another logger (e.g. SLF4J), I get a warning and, much more dramatically, debug logging is automatically enabled for my whole project. This turns out badly, since importing GraknClient also enables my Kafka debug log, which prints hundreds to thousands of messages per minute.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/korny/.m2/repository/ch/qos/logback/logback-classic/1.1.3/logback-classic-1.1.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/korny/.m2/repository/org/jboss/slf4j/slf4j-jboss-logging/1.2.0.Final/slf4j-jboss-logging-1.2.0.Final.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [ch.qos.logback.classic.util.ContextSelectorStaticBinder]

Environment

1.Linux
2. Latest
3. client-java latest
4. Other environment details:

Reproducible Steps

  1. import org.slf4j.Logger;
     import org.slf4j.LoggerFactory;
  2. import GraknClient

bazel test //test/behaviour/connection/... intermittently causes Grabl to run out of memory

Description

bazel test //test/behaviour/connection/... intermittently causes Grabl to run out of memory.

Environment

  1. OS (where Grakn server runs): Ubuntu 16.04
  2. Grakn version (and platform): Grakn Core 2.0
  3. Other environment details: Running in Grabl without RBE

Reproducible Steps

Run bazel test //test/behaviour/connection/... in Grabl. Intermittently, it fails with exit code 137, which typically means the process was killed because the system ran out of memory.

Cluster tests (which are run in CI) cannot be run by community contributors

Problem to Solve

Our Bazel workspace contains a private Grakn Cluster artifact for use when running Cluster tests, but it requires internal credentials, which community contributors don't have.

Current Workaround

Don't run these tests (or CI jobs)

Proposed Solution

N/A, see comments and linked issue

Review usages of '...' to denote an array filter in Concept methods

Problem to Solve

We discovered that Relation.players(RoleType...) could be called with no parameters, in which case the argument would be treated as an empty array, leading to unexpected results. We fixed this case, but there may be others that need to be looked at.

Proposed Solution

In cases where T... is used to denote an array filter, refactor the relevant method into two methods: one with no parameters, and one with List<T> or List<? extends T> as the parameter type (sketched below).
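Applied to Relation.players, the split could look like this (a sketch following the issue's naming, not a verified API):

    // Before: a zero-argument call silently becomes an empty filter.
    // Stream<? extends Thing> players(RoleType... roleTypes);

    // After: "no filter" and "filter by these roles" are distinct methods,
    // so an accidental zero-argument call can no longer change meaning.
    Stream<? extends Thing> players();
    Stream<? extends Thing> players(List<RoleType> roleTypes);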

[Java Client]: query.withTx(writeTransaction).execute() returns empty list when type mismatch

SCHEMA:

define
person sub entity,
has person-id,
plays employee;

company sub entity,
has name,
plays employer;

employment sub relationship,
relates employee,
relates employer;

person-id sub attribute datatype double;
name sub attribute datatype string;

JAVA CLIENT:

    InsertQuery insertQuery = Graql.match(
            var("p").isa("person").has("person-id", "1"),
            var("c").isa("company").has("name", "grakn"))
            .insert(var("e").isa("employment").rel("employer", "c").rel("employee", "p"));

    Grakn.Transaction writeTransaction = session.transaction(GraknTxType.WRITE);

    List insertedId = insertQuery.withTx(writeTransaction).execute();

=> Returns an EMPTY LIST (the match clause presumably finds nothing, so the insert never runs); but it should return an error saying attribute type person-id is of datatype double, not string.

Add user-friendly methods to get keys, and instances by key

Problem to Solve

We want to discourage the use of IIDs and encourage the use of keys.

The most common thing that IIDs are used for is identifying vertices. People can use getIID to get an identifier for their object, and there is currently no other easy way to do this.

Current Workaround

We have getAttributes for identification, but it's fiddly, unclear about its purpose, and requires a network round trip. We have attributeType.getOwners(ownerType), but it feels like a very roundabout way of getting an instance by key.

Proposed Solution

With a new syntactic-sugar method, we could improve the UX of key lookup:

  • Introduce ThingType.getInstanceByKey(keyType: AttributeType.Long, keyValue: long), with overloads for string and datetime (we may also need getInstanceByCompositeKey(keys: Map<AttributeType, Object>)?) - sketched after this list
  • I'm also thinking that having onlyKey as a parameter of Thing.getAttributes is not the best possible UX; perhaps we should introduce Thing.getKeys?
  • I'm also thinking that getting a key requiring a network roundtrip is in general not ideal at all, as it encourages using IIDs, not keys, to identify returned concepts. Should we perhaps make including the key with every Thing an option that defaults as true?
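A sketch of the first suggestion, transcribed into Java from the signatures above; everything here is proposal, not existing API:

    interface ThingType {
        // lookups by a single key, one overload per key value type
        Thing getInstanceByKey(AttributeType.Long keyType, long keyValue);
        Thing getInstanceByKey(AttributeType.String keyType, java.lang.String keyValue);
        Thing getInstanceByKey(AttributeType.DateTime keyType, java.time.LocalDateTime keyValue);
        // possible composite-key variant
        Thing getInstanceByCompositeKey(java.util.Map<AttributeType, Object> keys);
    }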

Additional Information

See also: vaticle/typedb#6175

There are already other ORMs that provide methods to retrieve data by primary key / key, for example Hibernate.

Missing path in javadoc and sources JARs

Description

It seems that the grakn-...-javadoc.jar and grakn-...-sources.jar files published on https://repo.grakn.ai/repository/maven/ have the wrong directory structure: they are missing their respective top-level folders. What I mean is that

  • grakn-client-1.8.1-... files don't contain the grakn/client/ folder
  • grakn-common-0.2.2-... files don't contain the grakn/common/ folder

This package mismatch makes it impossible to see methods' javadocs and source code in some IDEs (notably Eclipse).

Environment

  1. OS (where Grakn server runs): N/A
  2. Grakn version (and platform): 1.8.2

Reproducible Steps

Compare the directory structure of the code JAR files and the sources and javadoc ones.

Expected Output

The sources and javadoc JAR files have the same directory structure as the code JAR files

Actual Output

The sources and javadoc JAR files have a different directory structure from the code JAR files

test-deployment-maven-release should only pull from the release repository

The pom.xml is incorrect in that it has two repositories: snapshot and release: https://github.com/graknlabs/client-java/blob/master/test/deployment/pom.xml

    <repositories>
        <repository>
            <id>repo.grakn.ai.release</id>
            <name>repo.grakn.ai</name>
            <url>http://repo.grakn.ai/repository/maven/</url>
        </repository>
        <repository>
            <id>repo.grakn.ai.snapshot</id>
            <name>repo.grakn.ai</name>
            <url>http://repo.grakn.ai/repository/maven-snapshot/</url>
        </repository>
    </repositories>

This isn't good, because it means we don't catch the case where we forgot to upload one of the artifacts to the release repository: the test will conveniently fall back to the snapshot repository.

The ideal solution would be to somehow include the snapshot repository for test-deployment-maven-snapshot, but exclude it for test-deployment-maven-release.

Multiple gRPC issues encountered when doing match query in parallel

Description

In Grabl, the constructRepositorySummary function constructs the response for the owner page: the statuses of the last 30 commits for a given repo. The data for those commits is queried in parallel:

private static Map<String, Map<String, JsonArray>> constructRepositorySummary(GraknClient.Session session, List<Symbol.Commit> commits) {
try (GraknClient.Transaction tx = session.transaction(GraknClient.Transaction.Type.READ)) {
    List<List<Map<String, String>>> repoSummaryAnswer = commits.stream().parallel().map(commit -> {
        GraqlMatch.Filtered query = queryRepoSummary(commit);
        List<ConceptMap> conceptMaps = tx.query().match(query).collect(toList());
        return conceptMaps.stream().map(conceptMap -> {
            Map<String, String> map = new HashMap<>();
            map.put("pipeline-name", conceptMap.get("pipeline-name").asAttribute().asString().getValue());
            map.put("workflow-name", conceptMap.get("workflow-name").asAttribute().asString().getValue());
            map.put("workflow-status", conceptMap.get("workflow-status").asAttribute().asString().getValue());
            return map;
        }).collect(toList());
    }).collect(toList());

    Map<String, Map<String, JsonArray>> repoSummary = initRepositorySummaryMap(); // map(pipeline -> map(workflow -> array(status)))
    for (int i = 0; i < repoSummaryAnswer.size(); ++i) {
        List<Map<String, String>> commitSummary = repoSummaryAnswer.get(i);
        for (Map<String, String> workflow: commitSummary) {
            String pipelineName = workflow.get("pipeline-name");
            String workflowName = workflow.get("workflow-name");
            String workflowStatus = workflow.get("workflow-status");
            repoSummary.get(pipelineName).get(workflowName).set(Constants.COMMIT_COUNT_HEADS - i - 1, workflowStatus);
        }
    }

    List <String> statusesList = commits.stream().parallel().map(commit -> getDependencyAnalysisStatus(commit, tx)).collect(toList());
    for (int i = 0; i < statusesList.size(); ++i) {
        repoSummary.get("analysis").get("dependency-analysis").set(Constants.COMMIT_COUNT_HEADS - i - 1, statusesList.get(i));
    }
    return repoSummary;
    }
}

Session is graknClient.session(keyspace, GraknClient.Session.Type.DATA)

Failing lines are:

List<List<Map<String, String>>> repoSummaryAnswer = commits.stream().parallel().map(commit ...
and the next .parallel call.

When trying to load the page, this function consistently (every time) throws some of the following errors:

SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@4dd0d326
io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
        at io.grpc.Status.asRuntimeException(Status.java:524)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:218)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:118)
        at io.grpc.MethodDescriptor.parseRequest(MethodDescriptor.java:307)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
        at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
        at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:102)
        at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:628)
        at grakn.protocol.TransactionProto$Transaction$Req.<init>(TransactionProto.java:402)
        at grakn.protocol.TransactionProto$Transaction$Req.<init>(TransactionProto.java:369)
        at grakn.protocol.TransactionProto$Transaction$Req$1.parsePartialFrom(TransactionProto.java:3195)
        at grakn.protocol.TransactionProto$Transaction$Req$1.parsePartialFrom(TransactionProto.java:3189)
        at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:86)
        at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parseFrom(ProtoLiteUtils.java:223)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:215)
        ... 10 more
grakn.client.common.exception.GraknClientException: null
        at grakn.client.common.exception.GraknClientException.of(GraknClientException.java:48)
        at grakn.client.rpc.RPCTransaction$Response$Done$Error.read(RPCTransaction.java:323)
        at grakn.client.rpc.RPCTransaction$ResponseCollector.take(RPCTransaction.java:247)
        at grakn.client.rpc.ResponseIterator.computeNext(ResponseIterator.java:59)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:145)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:140)
        at java.base/java.util.Iterator.forEachRemaining(Iterator.java:132)
        at java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
        at grabl.concept.OwnerPage.lambda$constructRepositorySummary$3(OwnerPage.java:108)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:952)
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:926)
        at java.base/java.util.stream.AbstractTask.compute(AbstractTask.java:327)
        at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:408)
        at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:736)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:919)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
        at grabl.concept.OwnerPage.constructRepositorySummary(OwnerPage.java:116)
        at grabl.concept.OwnerPage.constructRepository(OwnerPage.java:31)
        at grabl.service.Owner.constructOwnerPageChunk(Owner.java:60)
        at grabl.service.Owner.lambda$constructOwnerPage$0(Owner.java:50)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290)
        at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:408)
        at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:736)
        at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
        at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
        at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
        at grabl.service.Session.serveOwnerPage(Session.java:364)
        at grabl.service.Session.onMessage(Session.java:183)
        at akka.actor.typed.javadsl.BuiltReceive.receive(ReceiveBuilder.scala:184)
        at akka.actor.typed.javadsl.BuiltReceive.receiveMessage(ReceiveBuilder.scala:175)
        at akka.actor.typed.javadsl.Receive.receive(Receive.scala:53)
        at akka.actor.typed.javadsl.AbstractBehavior.receive(AbstractBehavior.scala:63)
        at akka.actor.typed.Behavior$.interpret(Behavior.scala:274)
        at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230)
        at akka.actor.typed.internal.InterceptorImpl$$anon$2.apply(InterceptorImpl.scala:55)
        at akka.actor.typed.internal.SimpleSupervisor.aroundReceive(Supervision.scala:123)
        at akka.actor.typed.internal.InterceptorImpl.receive(InterceptorImpl.scala:83)
        at akka.actor.typed.Behavior$.interpret(Behavior.scala:274)
        at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230)
        at akka.actor.typed.internal.adapter.ActorAdapter.handleMessage(ActorAdapter.scala:126)
        at akka.actor.typed.internal.adapter.ActorAdapter.aroundReceive(ActorAdapter.scala:106)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:573)
        at akka.actor.ActorCell.invoke(ActorCell.scala:543)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:269)
        at akka.dispatch.Mailbox.run(Mailbox.scala:230)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
2021-02-17 10:41:13,724 [grakn-core-main::5] [ERROR] g.core.server.rpc.TransactionStream - [SRV14] Invalid Server Operation: The request message was not recognised.
grakn.core.common.exception.GraknException: [SRV14] Invalid Server Operation: The request message was not recognised.
        at grakn.core.common.exception.GraknException.of(GraknException.java:55)
        at grakn.core.server.rpc.TransactionRPC.handleRequest(TransactionRPC.java:124)
        at grakn.core.server.rpc.TransactionStream.handleRequest(TransactionStream.java:118)
        at grakn.core.server.rpc.TransactionStream.onNext(TransactionStream.java:82)
        at grakn.core.server.rpc.TransactionStream.onNext(TransactionStream.java:48)
        at io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:255)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
        at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
java.lang.IllegalArgumentException: java.lang.IllegalArgumentException: null frame before EOS
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
        at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:600)
        at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:678)
        at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:737)
        at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateParallel(ReduceOps.java:919)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
        at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
        at grabl.concept.OwnerPage.constructRepositorySummary(OwnerPage.java:116)
        at grabl.concept.OwnerPage.constructRepository(OwnerPage.java:31)
        at grabl.service.Owner.constructOwnerPageChunk(Owner.java:60)
        at grabl.service.Owner.lambda$constructOwnerPage$0(Owner.java:50)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290)
        at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinTask.doInvoke(ForkJoinTask.java:408)
        at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:736)
        at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
        at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
        at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
        at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
        at grabl.service.Session.serveOwnerPage(Session.java:364)
        at grabl.service.Session.onMessage(Session.java:183)
        at akka.actor.typed.javadsl.BuiltReceive.receive(ReceiveBuilder.scala:184)
        at akka.actor.typed.javadsl.BuiltReceive.receiveMessage(ReceiveBuilder.scala:175)
        at akka.actor.typed.javadsl.Receive.receive(Receive.scala:53)
        at akka.actor.typed.javadsl.AbstractBehavior.receive(AbstractBehavior.scala:63)
        at akka.actor.typed.Behavior$.interpret(Behavior.scala:274)
        at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230)
        at akka.actor.typed.internal.InterceptorImpl$$anon$2.apply(InterceptorImpl.scala:55)
        at akka.actor.typed.internal.SimpleSupervisor.aroundReceive(Supervision.scala:123)
        at akka.actor.typed.internal.InterceptorImpl.receive(InterceptorImpl.scala:83)
        at akka.actor.typed.Behavior$.interpret(Behavior.scala:274)
        at akka.actor.typed.Behavior$.interpretMessage(Behavior.scala:230)
        at akka.actor.typed.internal.adapter.ActorAdapter.handleMessage(ActorAdapter.scala:126)
        at akka.actor.typed.internal.adapter.ActorAdapter.aroundReceive(ActorAdapter.scala:106)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:573)
        at akka.actor.ActorCell.invoke(ActorCell.scala:543)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:269)
        at akka.dispatch.Mailbox.run(Mailbox.scala:230)
        at akka.dispatch.Mailbox.exec(Mailbox.scala:242)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: java.lang.IllegalArgumentException: null frame before EOS
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:135)
        at io.grpc.internal.AbstractClientStream.deliverFrame(AbstractClientStream.java:186)
        at io.grpc.internal.MessageFramer.commitToSink(MessageFramer.java:352)
        at io.grpc.internal.MessageFramer.flush(MessageFramer.java:302)
        at io.grpc.internal.AbstractStream.flush(AbstractStream.java:75)
        at io.grpc.internal.ForwardingClientStream.flush(ForwardingClientStream.java:42)
        at io.grpc.internal.ClientCallImpl.sendMessageInternal(ClientCallImpl.java:593)
        at io.grpc.internal.ClientCallImpl.sendMessage(ClientCallImpl.java:564)
        at io.grpc.stub.ClientCalls$CallToStreamObserverAdapter.onNext(ClientCalls.java:364)
        at grakn.client.rpc.RPCTransaction.stream(RPCTransaction.java:175)
        at grakn.client.query.QueryManager.iterateQuery(QueryManager.java:156)
        at grakn.client.query.QueryManager.match(QueryManager.java:58)
        at grakn.client.query.QueryManager.match(QueryManager.java:52)
        at grabl.concept.OwnerPage.lambda$constructRepositorySummary$3(OwnerPage.java:108)
        at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
        at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
        at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
        at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:952)
        at java.base/java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:926)
        at java.base/java.util.stream.AbstractTask.compute(AbstractTask.java:327)
        at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
        ... 5 common frames omitted
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
        at io.netty.util.internal.ReferenceCountUpdater.toLiveRealRefCnt(ReferenceCountUpdater.java:74)
        at io.netty.util.internal.ReferenceCountUpdater.release(ReferenceCountUpdater.java:138)
        at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
        at io.netty.buffer.CompositeByteBuf$Component.free(CompositeByteBuf.java:1941)
        at io.netty.buffer.CompositeByteBuf.deallocate(CompositeByteBuf.java:2246)
        at io.netty.buffer.AbstractReferenceCountedByteBuf.handleRelease(AbstractReferenceCountedByteBuf.java:110)
        at io.netty.buffer.AbstractReferenceCountedByteBuf.release(AbstractReferenceCountedByteBuf.java:100)
        at io.netty.buffer.AbstractDerivedByteBuf.release0(AbstractDerivedByteBuf.java:94)
        at io.netty.buffer.AbstractDerivedByteBuf.release(AbstractDerivedByteBuf.java:90)
        at io.netty.util.ReferenceCountUtil.release(ReferenceCountUtil.java:88)
        at io.netty.util.ReferenceCountUtil.safeRelease(ReferenceCountUtil.java:113)
        at io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:271)
        at io.netty.channel.ChannelOutboundBuffer.removeBytes(ChannelOutboundBuffer.java:352)
        at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:431)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:930)
        at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:354)
        at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:897)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1372)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
        at io.netty.handler.codec.http2.Http2ConnectionHandler.flush(Http2ConnectionHandler.java:189)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:750)
        at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:742)
        at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:728)
        at io.netty.channel.DefaultChannelPipeline.flush(DefaultChannelPipeline.java:967)
        at io.netty.channel.AbstractChannel.flush(AbstractChannel.java:242)
        at io.grpc.netty.WriteQueue.flush(WriteQueue.java:147)
        at io.grpc.netty.WriteQueue.access$000(WriteQueue.java:34)
        at io.grpc.netty.WriteQueue$1.run(WriteQueue.java:46)
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:834)

Feb 17, 2021 10:52:53 AM io.grpc.internal.SerializingExecutor run
SEVERE: Exception while executing runnable io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable@bd454cd
io.grpc.StatusRuntimeException: INTERNAL: Invalid protobuf byte sequence
        at io.grpc.Status.asRuntimeException(Status.java:524)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:218)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:118)
        at io.grpc.MethodDescriptor.parseRequest(MethodDescriptor.java:307)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
        at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero).
        at com.google.protobuf.InvalidProtocolBufferException.invalidTag(InvalidProtocolBufferException.java:102)
        at com.google.protobuf.CodedInputStream$ArrayDecoder.readTag(CodedInputStream.java:628)
        at grakn.protocol.TransactionProto$Transaction$Req.<init>(TransactionProto.java:402)
        at grakn.protocol.TransactionProto$Transaction$Req.<init>(TransactionProto.java:369)
        at grakn.protocol.TransactionProto$Transaction$Req$1.parsePartialFrom(TransactionProto.java:3195)
        at grakn.protocol.TransactionProto$Transaction$Req$1.parsePartialFrom(TransactionProto.java:3189)
        at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:86)
        at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parseFrom(ProtoLiteUtils.java:223)
        at io.grpc.protobuf.lite.ProtoLiteUtils$MessageMarshaller.parse(ProtoLiteUtils.java:215)
        ... 10 more

Also, there are sometimes UTF-8-related errors that I didn't copy.

The queries themselves look like this:

match
$commit isa commit, has symbol "VladGan/console@ec8270115acebb1d94aeaa72cbd57fa385e9a58b";
$_ (trigger: $commit, pipeline: $pipeline) isa pipeline-automation;
$pipeline has name $pipeline-name, has latest true;
$_ (pipeline: $pipeline, workflow: $workflow) isa pipeline-workflow;
$workflow has name $workflow-name, has status $workflow-status, has latest true;
get $pipeline-name, $workflow-name, $workflow-status;

(the commit's symbol is the part that varies between queries)

I verified that the same queries consistently work fine without .parallel.
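
For context, here is a minimal sketch of the access pattern that triggers the failures: a parallel stream issuing one match query per commit symbol. This is not Grabl's actual code; the matchPipelineStatus wrapper is hypothetical (and deliberately left undefined), and the Grakn.Transaction type name is an assumption about the 2.0 Java client API.

import java.util.List;
import java.util.stream.Collectors;

class ParallelQueryRepro {
    // Hypothetical shape of the fan-out: one read transaction shared across a
    // parallel stream, one match query per commit symbol. Replacing
    // parallelStream() with stream() makes the same queries run consistently.
    static List<String> workflowStatuses(Grakn.Transaction tx, List<String> commitSymbols) {
        return commitSymbols.parallelStream()
                .flatMap(symbol -> matchPipelineStatus(tx, symbol)) // hypothetical wrapper around tx.query().match(...)
                .collect(Collectors.toList());
    }
}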

Link to the schema and data files:

https://www.dropbox.com/home/Engineering/Grabl/Issues/Grakn-2.0%20issues/275

CI build attempts to upload Grakn many times to a remote server

Description

Grabl is attempting to upload Grakn many times to a remote server on each build.

This is caused by RBE taking every Bazel target, getting its output and sending it to a remote server.
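
Assuming the uploads come from the remote cache being populated with locally built outputs (rather than from remote execution directly), one possible mitigation is to disable that upload via a standard Bazel flag; whether it applies to Grabl's RBE setup is an assumption:

# .bazelrc: keep reading from the remote cache, but stop pushing local action outputs to it
build --remote_upload_local_results=false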

Environment

  1. OS (where Grakn server runs): Ubuntu 16.04
  2. Grakn version (and platform): -

Make 'isInferred' a Local Thing method

Problem to Solve

Workbase is slowed down by a large number of round trips to retrieve the inferred flag on Things.

Current Workaround

None; we accept that it's slow.

Proposed Solution

Make isInferred a Local Thing method

Additional Information

It was local in 1.8 but was made remote to reduce payload size and simplify the protocol. However, 1 bit per Thing is an acceptable cost.
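
For illustration, a minimal sketch of the proposed shape on the Java client; the ThingImpl class and its constructor are hypothetical, and it assumes the protobuf Thing message can carry the flag:

public class ThingImpl implements Thing.Local {

    private final boolean inferred; // populated once, from the Thing protobuf message

    ThingImpl(ConceptProto.Thing proto) {
        this.inferred = proto.getInferred(); // assumes the protocol exposes the flag
    }

    @Override
    public boolean isInferred() {
        return inferred; // answered locally, with no round trip to the server
    }
}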

Intermittently failing test that has 2 checks for erroring transactions in quick succession

Problem to Solve

The scenario:

Scenario: uncommitted transaction writes are not persisted
    When graql define
      """
      define dog sub entity;
      """
    When session opens transaction of type: read
    Then graql match; throws exception
      """
      match $x type dog;
      """

intermittently hangs for 10 seconds in the Java and Node.js clients, and throws an error in the Python client. The error occurs in the tear-down steps, not in any of the actual scenario steps.

It's not currently clear why it occurs. It always passes in Core, but causes issues in all of our clients.

Current Workaround

We ignore the test and accept that, occasionally, errors in concurrent transactions will cause clients to hang for a few seconds.

Proposed Solution

We should look into what's really going on and ensure that the error is propagated and managed gracefully.
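
As a starting point, here is a hedged sketch of the kind of tear-down where the hang surfaces; the tracked collections and type names are illustrative, not the BDD runners' actual code:

@After
public void tearDown() {
    openTransactions.forEach(tx -> tx.close()); // a transaction that already threw can block here for ~10 seconds
    openSessions.forEach(session -> session.close());
}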

Organise ErrorMessage and TypeDBClientException better

Description

Our error messages are pretty inconsistent and some of them are odd and misleading.

For example:

[CLI2] Illegal Client Operation: Unable to connect to TypeDB server.

How is this an illegal client operation? Sounds more like a connection error to me.

[CLI3] Illegal Client Operation: Value cannot be less than 1, was '-42'.

Value? What value? This error message (which I believe refers to batch size) should be more precise.

[CLI7] Illegal Client Operation: Received a response with unknown request id '00000000-0000-0000-0000-000000000000'

This is definitely our own internal issue, and should probably be an "Internal Error".

All of these should be cleaned up.

And also...

TypeDBClientException is a little convoluted, with a mixture of multiple constructors and static of() factory methods.

For consistency with TypeDB Core, we should focus on the of() factory methods while making the constructor(s) protected or private.
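
For illustration, a hedged sketch of the factory-method shape this could converge on; the signatures are illustrative, not the class as it stands today:

public class TypeDBClientException extends RuntimeException {

    private TypeDBClientException(String message, Throwable cause) {
        super(message, cause);
    }

    public static TypeDBClientException of(ErrorMessage error, Object... parameters) {
        return new TypeDBClientException(error.message(parameters), null);
    }

    public static TypeDBClientException of(Throwable cause) {
        return new TypeDBClientException(cause.getMessage(), cause);
    }
}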
