TypeDB


Introducing TypeDB

TypeDB is a polymorphic database with a conceptual data model, a strong subtyping system, a symbolic reasoning engine, and an elegant type-theoretic query language, TypeQL.

IMPORTANT NOTE: TypeDB & TypeQL are in the process of being rewritten in Rust. There will be significant refinements to the language and minor breaks in backwards compatibility. Learn about the changes in our roadmap issue on GitHub. The biggest change in TypeDB 3.0 will be our storage data structure and architecture, which significantly boosts performance. We’re aiming to release 3.0 in the summer this year, along with preliminary benchmarks of TypeDB.

Polymorphic databases

Why TypeDB was built

Data frequently exhibits polymorphic features in the form of inheritance hierarchies and interface dependencies. TypeDB was crafted to solve the inability of current database paradigms to natively express these polymorphic features.

Providing full support for polymorphism

In order to fully support polymorphism, a database needs to implement three key components: a schema, a query language, and an inference engine.

The TypeDB database

The schema

TypeDB schemas are based on a modern type system that natively supports inheritance and interfaces, and follow a conceptual data modeling approach in which user-defined types subtype (according to their function) one of three root types: entities, relations, and attributes.

  • Entities are independent objects;
  • Relations depend on role interfaces played by either entities or relations;
  • Attributes are properties with a value that can be owned by entities or relations.

Interfaces and inheritance for these types can be combined in many ways, resulting in highly expressive data models.

define

full-name sub attribute, value string;
id sub attribute, value string;
email sub id;
employee-id sub id;

user sub entity,
    owns full-name,
    owns email @unique,
    plays mentorship:trainee;
employee sub user,
    owns employee-id @key,
    plays mentorship:mentor;

mentorship sub relation,
    relates mentor,
    relates trainee;
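Data conforming to this schema can be inserted with ordinary TypeQL insert queries. The following is an illustrative sketch; the names and values are invented for this example:

```typeql
insert
# Alice is an employee, so she can own employee-id and play mentor
# in addition to everything a user can do.
$alice isa employee,
    has full-name "Alice Smith",
    has email "alice@example.com",
    has employee-id "E-001";
# Bob is a plain user, so he can only play the trainee role.
$bob isa user,
    has full-name "Bob Jones",
    has email "bob@example.com";
(mentor: $alice, trainee: $bob) isa mentorship;
```

Because employee subtypes user, the polymorphic queries below return Alice both when matching employees and when matching users in general.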

The query language

The query language of TypeDB is TypeQL. Its syntax is fully variablizable and provides native support for polymorphic queries. The language is based on fully declarative and composable patterns, mirroring the structure of natural language.

match $user isa user,
    has full-name $name,
    has email $email;
# This returns all users of any type

match $user isa employee,
    has full-name $name,
    has email $email,
    has employee-id $id;
# This returns only users who are employees

match $user-type sub user;
$user isa $user-type,
    has full-name $name,
    has email $email;
# This returns all users and their type

The inference engine

Any query in TypeDB is semantically validated by TypeDB’s inference engine for consistency with the database schema. This prevents invalid schema updates and data inserts before they can affect the integrity of the database.
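For example, given the schema sketched earlier, a query that asks a relation for an attribute it cannot own should be rejected by semantic validation rather than silently returning no results. This is an illustrative sketch, not output from a real session:

```typeql
match
$m isa mentorship, has full-name $n;
# Rejected: the schema does not allow mentorship to own full-name,
# so the inference engine reports a semantic error instead of an empty answer set.
```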

TypeDB can also work with data that is not physically stored in the database, but instead logically inferred based on user-specified rules. This enables developers to cleanly separate their source data from their application logic, often allowing for complex systems to be described by combinations of simple rules.

define
rule transitive-team-membership:
    when {
        (team: $team-1, member: $team-2) isa team-membership;
        (team: $team-2, member: $member) isa team-membership;
    } then {
        (team: $team-1, member: $member) isa team-membership;
    };

insert
$john isa user, has email "[email protected]";
$eng isa team, has name "Engineering";
$cloud isa team, has name "Cloud";
(team: $eng, member: $cloud) isa team-membership;
(team: $cloud, member: $john) isa team-membership;

match
$john isa user, has email "[email protected]";
(team: $team, member: $john) isa team-membership;
# This will return both Cloud and Engineering for $team due to the defined rule

Effective database engineering

TypeDB breaks down the patchwork of existing database paradigms into three fundamental ingredients: types, inheritance, and interfaces. This provides a unified way of working with data across all database applications and directly impacts development.

Installation and editions

TypeDB editions

  • TypeDB Cloud — multi-cloud DBaaS
  • TypeDB Cloud self-hosted — allows you to deploy TypeDB Cloud in your own environment
  • TypeDB Core — Open-source edition of TypeDB ← This repository

For a comparison of all three editions, see the Deploy page on our website.

Download and run TypeDB Core

You can download TypeDB from the GitHub Releases.

Check our Installation guide to get started.

Compiling TypeDB Core from source

Note: You DO NOT NEED to compile TypeDB from source if you just want to use TypeDB. See the "Download and run TypeDB Core" section above.

  1. Make sure you have the following dependencies installed on your machine:

  2. You can build TypeDB with one of the following commands, depending on the target architecture and operating system:

    $ bazel build //:assemble-linux-x86_64-targz
    $ bazel build //:assemble-linux-arm64-targz
    $ bazel build //:assemble-mac-x86_64-zip
    $ bazel build //:assemble-mac-arm64-zip
    $ bazel build //:assemble-windows-x86_64-zip

    Outputs to: bazel-bin/.

  3. If you're on a Mac and would like to run any bazel test commands, you will need to install:

    • snappy: brew install snappy
    • jemalloc: brew install jemalloc

Resources

Developer resources

Useful links

If you want to begin your journey with TypeDB, you can explore the following resources:

Contributions

TypeDB and TypeQL have been built using various open-source frameworks and technologies throughout their evolution. Today, TypeDB and TypeQL use Speedb, pest, SCIP, Bazel, gRPC, ZeroMQ, and Caffeine.

Thank you!

In the past, TypeDB was enabled by various open-source products and communities that we are hugely thankful to: RocksDB, ANTLR, Apache Cassandra, Apache Hadoop, Apache Spark, Apache TinkerPop, and JanusGraph.

Package hosting

Package repository hosting is graciously provided by Cloudsmith. Cloudsmith is the only fully hosted, cloud-native, universal package management solution, enabling your organization to create, store, and share packages in any format, to any place, with total confidence.

Licensing

This software is developed by Vaticle.
It's released under the Mozilla Public License 2.0 (MPL 2.0). For license information, please see LICENSE.

Vaticle also provides a commercial license for TypeDB Cloud self-hosted - get in touch with our team at [email protected].

Copyright (C) 2023 Vaticle.


typedb's Issues

Relations with resources as single VarAdmin

General

#684

Analytics

The following var is interpreted as Relation and passed as a whole, without decomposing to properties:

$rel1 (happening: $b, protagonist: $p) isa event-protagonist has role "parent";

Cannot equals with strings

Migration

In a Graql template, using @Equal(this "that") or if(eq this "that") to compare string values does not work.

Null pointer when materialising relation with sub roles

Reasoner

Another interesting one: match ($x, $y) isa marriage; works, but if I try to match the relation itself with match $r ($x, $y) isa marriage; it fails 😞
This is with materialisation on. I get this stack trace:


java.lang.NullPointerException: null
    at ai.grakn.graql.internal.reasoner.query.AtomicQuery.lambda$null$2(AtomicQuery.java:133)
    at java.util.HashMap$EntrySet.forEach(HashMap.java:1043)
    at ai.grakn.graql.internal.reasoner.query.AtomicQuery.lambda$materialiseComplete$3(AtomicQuery.java:132)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
    at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
    at java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1548)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    at ai.grakn.graql.internal.reasoner.query.AtomicQuery.materialiseComplete(AtomicQuery.java:123)
    at ai.grakn.graql.internal.reasoner.query.AtomicQuery.materialise(AtomicQuery.java:176)
    at ai.grakn.graql.internal.reasoner.query.AtomicMatchQuery.lambda$materialise$2(AtomicMatchQuery.java:85)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at ai.grakn.graql.internal.reasoner.query.AtomicMatchQuery.materialise(AtomicMatchQuery.java:78)
    at ai.grakn.graql.Reasoner.answerWM(Reasoner.java:172)
    at ai.grakn.graql.Reasoner.answer(Reasoner.java:243)
    at ai.grakn.graql.Reasoner.resolveAtomicQuery(Reasoner.java:259)
    at ai.grakn.graql.Reasoner.resolveConjunctiveQuery(Reasoner.java:272)
    at ai.grakn.graql.Reasoner.lambda$resolve$8(Reasoner.java:294)
    at java.lang.Iterable.forEach(Iterable.java:75)
    at ai.grakn.graql.Reasoner.resolve(Reasoner.java:292)
    at ai.grakn.graql.Reasoner.resolveToQuery(Reasoner.java:308)
    at ai.grakn.engine.controller.VisualiserController.matchQuery(VisualiserController.java:145)

Clean Up Variable Scoping

Migration

In Alex's own words: automatic scoping of variables in the templating language needs to be reworked, and there are other inconsistencies in the template language.

Duplicates On Simple Match Query

Graql

My first bug report. This brings me so much joy.

I create a simple ontology with:

product isa entity-type;
book sub product;
video sub product;
music sub product;
dvd sub video;

 

Then I run this simple query and get duplicates:

 

>>> match $x isa entity-type;
$x type-name customer isa entity-type;
$x type-name product isa entity-type;
$x type-name music isa entity-type;
$x type-name book isa entity-type;
$x type-name video isa entity-type;
$x type-name dvd isa entity-type;
$x type-name review isa entity-type;
$x type-name video isa entity-type;
$x type-name dvd isa entity-type;

Materialising indirect types

Reasoner

Genealogy data set:

 

match $x isa document; ($x, $y); $y isa $z; $z isa entity-type;

 

Leads to, among others, the following materialisation:

 

insert

$y1 id "3089"
$z-type-4b7228c0-144e-45be-aa1c-fe7f901f4026 id "2"
$z isa $z-type-4b7228c0-144e-45be-aa1c-fe7f901f4026
$rel-d83ff130-2cc2-49ce-9001-9f6295375cc4 id "557"
$y isa $z
$y id "3982"
isa $rel-d83ff130-2cc2-49ce-9001-9f6295375cc4 (spouse2: $y1, spouse1: $y)

 

which misses the reference for the type of $z.

Migration is failing silently

Migration

When trying to load a CSV file I get this:

Migrating data ../protestdata_1.csv using Grakn Engine localhost:4567 into graph grakn
Migration complete.
Initiating shutdown...

But when I query for anything I only get the ontology back.

When looking at my engine logs I see the following:

3:11:27.366 [pool-2-thread-3] ERROR ai.grakn.engine.loader.Loader - Caught exception
ai.grakn.exception.InvalidConceptValueException: The value ['5'] must be of datatype ['java.lang.Long']
    at ai.grakn.graph.internal.ResourceImpl.setValue(ResourceImpl.java:103)
    at ai.grakn.graph.internal.ResourceImpl.<init>(ResourceImpl.java:42)
    at ai.grakn.graph.internal.ElementFactory.buildResource(ElementFactory.java:95)
    at ai.grakn.graph.internal.ResourceTypeImpl.lambda$putResource$0(ResourceTypeImpl.java:82)
    at ai.grakn.graph.internal.TypeImpl.addInstance(TypeImpl.java:64)
    at ai.grakn.graph.internal.ResourceTypeImpl.putResource(ResourceTypeImpl.java:81)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.lambda$putConceptByType$11(InsertQueryExecutor.java:263)
    at java.util.Optional.orElseGet(Optional.java:267)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.addOrGetInstance(InsertQueryExecutor.java:285)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.putConceptByType(InsertQueryExecutor.java:262)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.lambda$addConcept$6(InsertQueryExecutor.java:170)
    at java.util.Optional.map(Optional.java:215)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.addConcept(InsertQueryExecutor.java:170)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.lambda$getConcept$5(InsertQueryExecutor.java:142)
    at java.util.HashMap.computeIfAbsent(HashMap.java:1126)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.getConcept(InsertQueryExecutor.java:142)
    at ai.grakn.graql.internal.query.InsertQueryExecutor.insertVar(InsertQueryExecutor.java:122)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.Iterator.forEachRemaining(Iterator.java:116)
    at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
    at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
    at ai.grakn.graql.internal.query.InsertQueryImpl.execute(InsertQueryImpl.java:89)
    at ai.grakn.graql.internal.query.InsertQueryImpl.execute(InsertQueryImpl.java:46)
    at ai.grakn.engine.loader.LoaderTask.lambda$insertQueriesInOneTransaction$0(LoaderTask.java:105)
    at java.util.ArrayList.forEach(ArrayList.java:1249)
    at ai.grakn.engine.loader.LoaderTask.insertQueriesInOneTransaction(LoaderTask.java:105)
    at ai.grakn.engine.loader.LoaderTask.attemptInsertions(LoaderTask.java:84)
    at ai.grakn.engine.loader.LoaderTask.start(LoaderTask.java:60)
    at ai.grakn.engine.backgroundtasks.InMemoryTaskManager.lambda$exceptionCatcher$1(InMemoryTaskManager.java:178)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Migration needs to notify the user of errors.

Broken links in Javadocs

Documentation

Something that is puzzling me (among many things) is a broken link in the Javadocs. There are a few links doing this. Taking this as an example: the links to GraknGraphFactory on this page https://static.javadoc.io/ai.grakn/grakn-graph/0.7.0/ai/grakn/factory/GraknGraphFactoryInMemory.html

are pointing at the docs portal: https://grakn.ai/pages/platform/index.html/grakn-core/apidocs/ai/grakn/GraknGraphFactory.html?is-external=true

rather than at the javadoc.io collection. Is this because they are linking across packages?

Is it possible that there is still a setting in the buildscripts somewhere that needs to be reset to javadoc.io?

graph.isClosed() method bug using Titan

Graph

import ai.grakn.Grakn;
import ai.grakn.GraknGraph;
import ai.grakn.factory.GraphFactory;
import ai.grakn.test.AbstractEngineTest;
import org.junit.Test;

import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import static org.junit.Assert.assertFalse;

public class GraphIsClosedTest extends AbstractEngineTest {

    private static final String KEYSPACE = "isitclosed";

    @Test
    public void isClosedTest() throws Exception {
        GraknGraph graph = Grakn.factory(Grakn.DEFAULT_URI, KEYSPACE).getGraph();
        graph.putEntityType("thing");
        graph.commit();

        assertFalse(graph.isClosed());

        Future future = Executors.newSingleThreadExecutor().submit(this::addThingToBatch);
        future.get();

        assertFalse(graph.isClosed());
        System.out.println(graph.getEntityType("thing").instances());
    }

    public void addThingToBatch() {
        try (GraknGraph graphBatchLoading = GraphFactory.getInstance().getGraph(KEYSPACE)) {
            graphBatchLoading.getEntityType("thing").addEntity();
            graphBatchLoading.commit();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

 

The above test throws a "graph is closed" exception at the call graph.getEntityType("thing").instances(), although we asserted on the previous line that the graph is not closed.

Patch Titan Transaction Count

Factory

This test:

 

GraknTitanGraphFactoryTest:testMultithreadedRetrievalOfGraphs

 

is failing because we are auto-closing the graph while transactions are still open. We are auto-closing when we shouldn't, because Titan's count of getOpenTransactions is not refreshed fast enough. I have tried to fix this at our level but can't, so I will be fixing it within Titan.

Resources do not merge properly

Analytics

Yesterday (2016-11-29) I noticed multiple copies of resources with the same value in the genealogy graph. If this is not handled properly, it might render Analytics almost useless. Unfortunately, I do not have more details at the moment.

Shortcut edge not deleted

Graph

After deleting a has-resource relation, the shortcut edge remains.

instance.relations returns 0.

instance.resource returns 1.

This only happens on Jenkins.

failure creating hadoop grakn graph

Graph

When running analytics in a test against an independent engine instance that I had started with data, I get this error:

 

java.lang.UnsupportedOperationException: Cannot produce a Grakn graph using the backend ['org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph']
    at ai.grakn.factory.TitanHadoopInternalFactory.buildGraknGraphFromTinker(TitanHadoopInternalFactory.java:49)
    at ai.grakn.factory.TitanHadoopInternalFactory.buildGraknGraphFromTinker(TitanHadoopInternalFactory.java:33)
    at ai.grakn.factory.AbstractInternalFactory.getGraph(AbstractInternalFactory.java:98)
    at ai.grakn.factory.AbstractInternalFactory.getGraph(AbstractInternalFactory.java:71)
    at ai.grakn.factory.TitanHadoopInternalFactory.getGraph(TitanHadoopInternalFactory.java:33)
    at ai.grakn.factory.AbstractInternalFactory.getGraph(AbstractInternalFactory.java:26)
    at ai.grakn.factory.SystemKeyspace.loadSystemOntology(SystemKeyspace.java:102)
    at ai.grakn.factory.FactoryBuilder.getGraknGraphFactory(FactoryBuilder.java:87)
    at ai.grakn.factory.FactoryBuilder.getFactory(FactoryBuilder.java:53)
    at ai.grakn.factory.GraknGraphFactoryPersistent.configureGraphFactory(GraknGraphFactoryPersistent.java:111)
    at ai.grakn.factory.GraknGraphFactoryPersistent.getGraphComputer(GraknGraphFactoryPersistent.java:77)
    at ai.grakn.graql.internal.query.analytics.AbstractComputeQuery.getGraphComputer(AbstractComputeQuery.java:141)
    at ai.grakn.graql.internal.query.analytics.CountQueryImpl.execute(CountQueryImpl.java:43)
    at ai.grakn.test.graql.analytics.DebugAnalyticsTest.testSlowMethod(DebugAnalyticsTest.java:17)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
    at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:78)
    at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:212)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

 

The test, testSlowMethod(), is in the branch bug-panic-room on my fork; just remember to start up Engine before running the test.

bulk persist fails due to validation

Analytics

While working in the panic room, I discovered that there is no sleep between retries when trying to bulk persist resources. This is believed to be the cause of degrees and cluster failing when persisting their results: there are essentially uniqueness violations causing failures when committing.

 

The fix seems to have been making the thread sleep between retries. This is currently only in the panic-room branch, but it should be implemented properly in the master code base with exponential backoff and retry.
