
apache / incubator-hugegraph

A graph database that supports 100+ billion data records, with high performance and scalability (includes OLTP engine, REST API, and backends)

Home Page: https://hugegraph.apache.org

License: Apache License 2.0

Shell 1.53% Java 98.32% Groovy 0.11% Dockerfile 0.04%
database graph-database graph gremlin big-data graphdb

incubator-hugegraph's Introduction


A graph database that supports more than 10 billion data records, with high performance and scalability


What is Apache HugeGraph?

HugeGraph is a fast and highly scalable graph database. Thanks to its excellent OLTP capability, billions of vertices and edges can easily be stored in and queried from HugeGraph. As it complies with the Apache TinkerPop 3 framework, complex graph queries can be expressed through Gremlin (a powerful graph traversal language).

Features

  • Compliance with Apache TinkerPop 3; supports the Gremlin & Cypher query languages
  • Schema metadata management, including VertexLabel, EdgeLabel, PropertyKey and IndexLabel
  • Multi-type indexes, supporting exact queries, range queries and complex combined-condition queries
  • Pluggable backend store driver framework, currently supporting RocksDB, Cassandra, HBase, ScyllaDB and MySQL/PostgreSQL, with new backend store drivers easy to add if needed
  • Integration with Flink/Spark/HDFS, and easy connection to other big-data platforms
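As a small sketch of the Gremlin support mentioned above, a query can be sent to the server's REST API as a JSON body; the query itself, the endpoint path and a server at localhost:8080 are assumptions based on a default local deployment:

```shell
# Build the JSON body for a hypothetical small Gremlin query.
GREMLIN='g.V().limit(3)'
PAYLOAD=$(printf '{"gremlin": "%s"}' "$GREMLIN")
echo "$PAYLOAD"
# Send it to a running server (assumed endpoint; uncomment to use):
# curl -s -X POST -H 'Content-Type: application/json' \
#      -d "$PAYLOAD" http://localhost:8080/apis/gremlin
```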

Quick Start

1. Docker Way (Convenient for Test)

You can run docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start a HugeGraph server with RocksDB (in the background) for test/dev. Visit the doc page or the README for more details. (Docker Compose)
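As a sketch, a minimal docker-compose.yml for the same setup; the image and port are taken from this section, everything else is plain-vanilla Compose:

```yaml
services:
  graph:
    image: hugegraph/hugegraph:1.2.0  # pin a release tag for stability
    container_name: graph
    ports:
      - "8080:8080"
```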

Note:

  1. The docker image of hugegraph is a convenience release, not an official distribution artifact. You can find more details in the ASF Release Distribution Policy.

  2. It is recommended to use a release tag (like 1.2.0) for the stable version. Use the latest tag to try the newest features under development.

2. Download Way

Visit the Download Page and refer to the doc to download the latest release package and start the server.

3. Source Building Way

Visit the Source Building Page and follow the steps to build the source code and start the server.

The project doc page contains more information on HugeGraph and provides detailed documentation for users. (Structure / Usage / API / Configs...)

Here are links to the other HugeGraph components/repositories:

  1. hugegraph-toolchain (graph tools loader/dashboard/tool/client)
  2. hugegraph-computer (integrated graph computing system)
  3. hugegraph-commons (common & rpc libs)
  4. hugegraph-website (doc & website code)
  5. hugegraph-ai (integrated Graph AI/LLM/KG system)

License

HugeGraph is licensed under the Apache 2.0 License.

Contributing

  • You are welcome to contribute to HugeGraph; please see How to Contribute & Guidelines for more information.
  • Note: It's recommended to use GitHub Desktop to greatly simplify the PR and commit process.
  • Thank you to all the people who already contributed to HugeGraph!


Thanks

HugeGraph relies on the TinkerPop framework; we also referred to the storage structure of Titan and the schema definition of DataStax. Thanks to TinkerPop, Titan and DataStax, and to all the other organizations and authors who contributed to the project.

You are welcome to contribute to HugeGraph, and we are looking forward to working with you to build an excellent open-source community.

Contact Us

  • GitHub Issues: feedback on usage issues and feature requests (quick response)
  • Feedback Email: [email protected] (subscribers only)
  • WeChat official account: Apache HugeGraph; you are welcome to scan the QR code to follow us.


incubator-hugegraph's People

Contributors

aroundabout, coderzc, conghuhu, corgiboygsj, danguge, dependabot[bot], freehackofjeff, houzhizhen, imbajin, jackyyangpassion, jadepeng, javeme, linary, littlestonelover, liuxiaocs7, lxb1111, msgui, nanke666, pengzna, seagle-yuan, simon824, sunnyboy-wyh, vgalaxies, wangyao2016, xuliguov5, z-huant, z7658329, zhoney, zony7, zyxxoo


incubator-hugegraph's Issues

init-store failed due to datapath and walpath error for rocksdb backend

Initing HugeGraph Store...
2018-08-22 16:54:58 718 [main] [INFO ] com.baidu.hugegraph.cmd.InitStore [] - Init graph with config file: conf/hugegraph.properties
2018-08-22 16:54:58 819 [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store 'rocksdb' for graph 'hugegraph'
2018-08-22 16:54:58 865 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: ./rocksdb-data/schema
2018-08-22 16:54:59 1043 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB './rocksdb-data/schema' with database 'hugegraph', try to init CF later
2018-08-22 16:54:59 1160 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: ./rocksdb-data/system
2018-08-22 16:54:59 1217 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB './rocksdb-data/system' with database 'hugegraph', try to init CF later
2018-08-22 16:54:59 1276 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: ./rocksdb-data/graph
2018-08-22 16:54:59 1325 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB './rocksdb-data/graph' with database 'hugegraph', try to init CF later
2018-08-22 16:54:59 1409 [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: ./edge_in/graph
2018-08-22 16:54:59 1411 [main] [ERROR] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB './edge_in/graph'
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStdSessions.<init>(RocksDBStdSessions.java:113) ~[hugegraph-rocksdb-0.7.4.jar:?]
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.openSessionPool(RocksDBStore.java:206) ~[hugegraph-rocksdb-0.7.4.jar:?]
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.open(RocksDBStore.java:154) [hugegraph-rocksdb-0.7.4.jar:?]
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.open(RocksDBStore.java:143) [hugegraph-rocksdb-0.7.4.jar:?]
at com.baidu.hugegraph.HugeGraph.initBackend(HugeGraph.java:187) [hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.cmd.InitStore.initBackend(InitStore.java:115) [hugegraph-dist-0.7.4.jar:?]
at com.baidu.hugegraph.cmd.InitStore.initGraph(InitStore.java:103) [hugegraph-dist-0.7.4.jar:?]
at com.baidu.hugegraph.cmd.InitStore.main(InitStore.java:86) [hugegraph-dist-0.7.4.jar:?]
Exception in thread "main" com.baidu.hugegraph.backend.BackendException: Failed to open RocksDB './edge_in/graph'
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.open(RocksDBStore.java:186)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.open(RocksDBStore.java:143)
at com.baidu.hugegraph.HugeGraph.initBackend(HugeGraph.java:187)
at com.baidu.hugegraph.cmd.InitStore.initBackend(InitStore.java:115)
at com.baidu.hugegraph.cmd.InitStore.initGraph(InitStore.java:103)
at com.baidu.hugegraph.cmd.InitStore.main(InitStore.java:86)
Caused by: org.rocksdb.RocksDBException: While lock file: ./rocksdb-data/graph/LOCK: No locks available
at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:286)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStdSessions.<init>(RocksDBStdSessions.java:113)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.openSessionPool(RocksDBStore.java:206)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.open(RocksDBStore.java:154)
... 5 more
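For context (not part of the original report): RocksDB takes an exclusive lock on the LOCK file in each data directory, so "While lock file ... No locks available" usually means another server/init-store process already holds the lock, or the data path sits on a filesystem (e.g. some NFS mounts) without working POSIX locks. The contention can be illustrated with flock(1) on a throwaway file; RocksDB itself uses fcntl locks, so this is an analogy rather than the exact mechanism:

```shell
LOCKFILE=$(mktemp)            # stand-in for ./rocksdb-data/graph/LOCK
exec 9>"$LOCKFILE"
flock -n 9 && FIRST=ok        # first open file description takes the lock
# A second open file description on the same file cannot take the
# exclusive lock again, which is the shape of the error above:
exec 8>"$LOCKFILE"
flock -n 8 || SECOND=refused
echo "first=$FIRST second=$SECOND"
```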

HugeGraph (0.7.4) fails to initialize with a ScyllaDB (1.7) backend ("Indexes are not supported yet")

Expected behavior

{type something here...}

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}

Setting a custom batch-import size for the REST API in the config file (batch.max_edges_per_batch=1000) causes a backend error after restart

Added the following settings to the hugegraph-0.7.4/conf/hugegraph.properties config file:
batch.max_edges_per_batch=1000
batch.max_vertices_per_batch=1000
When importing vertex and edge data in batches of 1000, the REST server returns a 500 error with an integer/string conversion failure:
{"exception":"class java.lang.ClassCastException","message":"java.lang.String cannot be cast to java.lang.Integer","cause":""}
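Independently of the server-side bug above, one way to respect a per-request batch limit on the client side is to pre-chunk the input before calling the batch REST API; a sketch with made-up file names, using the 1000-record limit from the config above:

```shell
# Pre-chunk an input file so each REST request stays within the batch limit.
WORKDIR=$(mktemp -d) && cd "$WORKDIR"
seq 1 2500 > vertices.txt            # stand-in for real vertex data
split -l 1000 vertices.txt batch_    # produces batch_aa, batch_ab, batch_ac
COUNT=$(ls batch_* | wc -l | tr -d ' ')
echo "$COUNT chunks of at most 1000 lines"
# each chunk would then be POSTed to the batch vertices endpoint in turn
```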

loader (0.7.0) fails to import data with a ScyllaDB backend

Importing the files under the loader's example directory for testing produces the following error:

Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy52.create(Unknown Source)
at com.baidu.hugegraph.structure.schema.SchemaBuilder$create.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117)
at Script1.run(Script1.groovy:15)
at com.baidu.hugegraph.loader.executor.GroovyExecutor.execute(GroovyExecutor.java:62)
at com.baidu.hugegraph.loader.HugeGraphLoader.createSchema(HugeGraphLoader.java:161)
at com.baidu.hugegraph.loader.HugeGraphLoader.load(HugeGraphLoader.java:104)
at com.baidu.hugegraph.loader.HugeGraphLoader.main(HugeGraphLoader.java:68)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.baidu.hugegraph.structure.schema.BuilderProxy.invoke(BuilderProxy.java:56)
... 10 more
Caused by: class java.lang.IllegalArgumentException: Not all index fields '[city]' are contained in schema properties '[age, name, addr, weight]'
at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:63)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:119)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:79)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:74)
at com.baidu.hugegraph.api.schema.IndexLabelAPI.create(IndexLabelAPI.java:43)
at com.baidu.hugegraph.driver.SchemaManager.addIndexLabel(SchemaManager.java:147)
at com.baidu.hugegraph.structure.schema.IndexLabel$BuilderImpl.create(IndexLabel.java:125)
at com.baidu.hugegraph.structure.schema.IndexLabel$BuilderImpl.create(IndexLabel.java:108)
... 15 more

The schema.groovy config file is as follows:

schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
schema.propertyKey("lang").asText().ifNotExist().create();
schema.propertyKey("date").asText().ifNotExist().create();
schema.propertyKey("price").asDouble().ifNotExist().create();

schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();

schema.indexLabel("personByName").onV("person").by("name").secondary().ifNotExist().create();
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();
schema.indexLabel("personByAgeAndCity").onV("person").by("age", "city").secondary().ifNotExist().create();
schema.indexLabel("softwareByPrice").onV("software").by("price").range().ifNotExist().create();

schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").properties("date", "weight").ifNotExist().create();
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").properties("date", "weight").ifNotExist().create();

schema.indexLabel("createdByDate").onE("created").by("date").secondary().ifNotExist().create();
schema.indexLabel("createdByWeight").onE("created").by("weight").range().ifNotExist().create();
schema.indexLabel("knowsByWeight").onE("knows").by("weight").range().ifNotExist().create()

After removing the index section, the import still fails:
schema.propertyKey("name").asText().ifNotExist().create();
schema.propertyKey("age").asInt().ifNotExist().create();
schema.propertyKey("city").asText().ifNotExist().create();
schema.propertyKey("weight").asDouble().ifNotExist().create();
schema.propertyKey("lang").asText().ifNotExist().create();
schema.propertyKey("date").asText().ifNotExist().create();
schema.propertyKey("price").asDouble().ifNotExist().create();

schema.vertexLabel("person").properties("name", "age", "city").primaryKeys("name").ifNotExist().create();
schema.vertexLabel("software").properties("name", "lang", "price").primaryKeys("name").ifNotExist().create();

schema.edgeLabel("knows").sourceLabel("person").targetLabel("person").properties("date", "weight").ifNotExist().create();
schema.edgeLabel("created").sourceLabel("person").targetLabel("software").properties("date", "weight").ifNotExist().create();

After the vertices have been imported: Exception in thread "main" java.lang.IllegalStateException: The id field can't be empty or null when id strategy is CUSTOMIZE
at com.google.common.base.Preconditions.checkState(Preconditions.java:199)
at com.baidu.hugegraph.util.E.checkState(E.java:68)
at com.baidu.hugegraph.loader.parser.VertexParser.checkIdField(VertexParser.java:103)
at com.baidu.hugegraph.loader.parser.VertexParser.<init>(VertexParser.java:43)
at com.baidu.hugegraph.loader.HugeGraphLoader.loadVertices(HugeGraphLoader.java:169)
at com.baidu.hugegraph.loader.HugeGraphLoader.load(HugeGraphLoader.java:111)
at com.baidu.hugegraph.loader.HugeGraphLoader.main(HugeGraphLoader.java:68)

However, the schema does not use the CUSTOMIZE id strategy for any vertex label.

Looking forward to a reply.

Is there a way to limit hugegraph-server's memory, e.g. in a config file or in the code?

Expected behavior

Being able to limit the server's memory ourselves.

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}
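As background for the question above: hugegraph-server is a JVM process, so its memory ceiling is set with standard JVM heap flags rather than a graph-level config option. A hedged sketch; which variable or script file the start script actually reads varies by version, so treat the names below as assumptions:

```shell
# Standard JVM heap flags: -Xms sets the initial heap, -Xmx caps it (4 GB here).
JAVA_OPTS="-Xms2g -Xmx4g"
echo "$JAVA_OPTS"
# bin/start-hugegraph.sh   # would pick these up only if the script honors JAVA_OPTS
```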

Error when running HugeGraphSpark

I am trying to run the HugeGraphSpark demo by following the Quick Start doc:
https://hugegraph.github.io/hugegraph-doc/quickstart/hugegraph-spark.html

Expected behavior

--

Actual behavior

After starting the Spark shell, an error is thrown when executing

val graph = sc.hugeGraph("test")

Steps to reproduce the problem

https://hugegraph.github.io/hugegraph-doc/quickstart/hugegraph-spark.html

Status of loaded data

Data has been loaded and can be queried normally.

Vertex/Edge summary

  • loaded vertices amount: < 10000
  • loaded edges amount: < 100000
  • loaded time: < 5 min

Specifications of environment

  • hugegraph version: v0.7.4
  • operating system: centos 7.4
  • hugegraph backend: hbase
  • HugeGraphSpark version: 0.6.1
  • spark version: 2.3.1

1. Does the REST API have a setting for the response timeout? 2. Where are the logs produced by REST API access recorded?

Expected behavior

{type something here...}

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}

About the technical discussion group

We recommend using Issues to report problems encountered while using HugeGraph, which makes it easier to record and share information.

Since developers in China are used to communicating via WeChat groups, we have also created a WeChat technical discussion group. Please follow the WeChat official account HugeGraph and leave a message there to join the HugeGraph technical discussion group.

adapt to tinkerpop tests for mysql backend

  1. The multi-source vertex-step problem also exists here; filter those tests out.
  2. MySQL does not support updating a vertex/edge property, so add-vertex/add-edge is used to update properties, which caused a bug where the tx held 'added vertices' when only properties were updated. Fixed.
  3. Multi-threaded tx commit did not work. Fixed by resetting the mode to auto-commit after a batch commit.

Does HugeGraph support common graph algorithms such as PageRank, Louvain and label propagation?

Expected behavior

{type something here...}

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}

Data import failed

Expected behavior

Expect hugegraph-loader to import the data normally.

Actual behavior

The import actually fails with an error.

Steps to reproduce the problem

Downloaded and unpacked hugegraph-loader 0.7 and ran it following the official docs (hugegraph-server is running normally, and init-store has been executed).
Command run: bin/hugegraph-loader -g hugegraph -f example/struct.json -s example/schema.groovy
Error:
Exception in thread "main" null: <!doctype html><title>HTTP Status 404 – Not Found</title>

HTTP Status 404 – Not Found

Type Status Report

Message /versions

Description The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.

Apache Tomcat/8.5.13


at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44)
at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:63)
at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:150)
at com.baidu.hugegraph.api.version.VersionAPI.get(VersionAPI.java:41)
at com.baidu.hugegraph.driver.VersionManager.getApiVersion(VersionManager.java:45)
at com.baidu.hugegraph.driver.HugeClient.checkServerApiVersion(HugeClient.java:101)
at com.baidu.hugegraph.driver.HugeClient.initManagers(HugeClient.java:84)
at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:59)
at com.baidu.hugegraph.loader.executor.HugeClients.newHugeClient(HugeClients.java:43)
at com.baidu.hugegraph.loader.executor.HugeClients.get(HugeClients.java:33)
at com.baidu.hugegraph.loader.HugeGraphLoader.createSchema(HugeGraphLoader.java:151)
at com.baidu.hugegraph.loader.HugeGraphLoader.load(HugeGraphLoader.java:104)
at com.baidu.hugegraph.loader.HugeGraphLoader.main(HugeGraphLoader.java:68)
root@personalserver:~/hugegraph-loader-0.7.0#
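Worth noting about the trace above: the 404 page is served by Apache Tomcat, while the loader probes the server's version endpoint on startup, so port 8080 is likely owned by a different Tomcat instance rather than HugeGraphServer. A sketch of telling the two apart; the sample reply below stands in for a healthy response (an assumption), and the commented curl would probe a live server:

```shell
RESPONSE='{"versions": {"version": "v1"}}'  # hypothetical healthy reply
# RESPONSE=$(curl -s http://localhost:8080/apis/version)
if printf '%s' "$RESPONSE" | grep -q '"versions"'; then
  RESULT="looks like HugeGraphServer"
else
  RESULT="not HugeGraphServer - check what owns port 8080"
fi
echo "$RESULT"
```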

HugeGraph-Studio (0.7.0) fails to start

Configuration:

After running the start command, the logs are as follows:

14:33:24.260 [main] DEBUG com.baidu.hugegraph.config.OptionSpace ID: TS: - Registered options for OptionHolder: StudioServerOptions
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'graph.server.host' is redundant, please ensure it has been registered
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'graph.server.port' is redundant, please ensure it has been registered
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'graph.name' is redundant, please ensure it has been registered
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'data.base_directory' is redundant, please ensure it has been registered
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'show.limit.data' is redundant, please ensure it has been registered
14:33:24.298 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'show.limit.edge.total' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'show.limit.edge.increment' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'gremlin.limit_suffix' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.font.color' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.font.size' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.size' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.scaling.min' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.scaling.max' is redundant, please ensure it has been registered
14:33:24.299 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.shape' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'vertex.vis.color' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'edge.vis.color.default' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'edge.vis.color.hover' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'edge.vis.color.highlight' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'edge.vis.font.color' is redundant, please ensure it has been registered
14:33:24.300 [main] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'edge.vis.font.size' is redundant, please ensure it has been registered
Sep 06, 2018 2:33:24 PM org.apache.coyote.AbstractProtocol init
INFO: Initializing ProtocolHandler ["http-nio-192.168.3.31-8088"]
Sep 06, 2018 2:33:24 PM org.apache.tomcat.util.net.NioSelectorPool getSharedSelector
INFO: Using a shared selector for servlet write/read
Sep 06, 2018 2:33:24 PM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Tomcat
Sep 06, 2018 2:33:24 PM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/8.5.2
Sep 06, 2018 2:33:24 PM org.apache.catalina.startup.ContextConfig getDefaultWebXmlFragment
INFO: No global web.xml found
Sep 06, 2018 2:33:26 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Sep 06, 2018 2:33:29 PM org.apache.jasper.servlet.TldScanner scanJars
INFO: At least one JAR was scanned for TLDs yet contained no TLDs. Enable debug logging for this logger for a complete list of JARs that were scanned but no TLDs were found in them. Skipping unneeded JARs during scanning can improve startup time and JSP compilation time.
Sep 06, 2018 2:33:29 PM org.apache.catalina.core.ApplicationContext log
INFO: Spring WebApplicationInitializers detected on classpath: [org.glassfish.jersey.server.spring.SpringWebApplicationInitializer@64d936fe]
Sep 06, 2018 2:33:29 PM org.apache.catalina.core.ApplicationContext log
INFO: Initializing Spring root WebApplicationContext
Sep 06, 2018 2:33:29 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization started
Sep 06, 2018 2:33:29 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing Root WebApplicationContext: startup date [Thu Sep 06 14:33:29 CST 2018]; root of context hierarchy
Sep 06, 2018 2:33:29 PM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from class path resource [applicationContext.xml]
Sep 06, 2018 2:33:30 PM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/root/hugegraph_studio/hugegraph-studio-0.7.0/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/root/hugegraph_studio/hugegraph-studio-0.7.0/bin/tomcat.8088/webapps/api/WEB-INF/lib/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
14:33:30.702 [192.168.3.31-startStop-1] DEBUG com.baidu.hugegraph.config.OptionSpace ID: TS: - Registered options for OptionHolder: StudioApiOptions
14:33:30.736 [192.168.3.31-startStop-1] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'studio.server.port' is redundant, please ensure it has been registered
14:33:30.737 [192.168.3.31-startStop-1] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'studio.server.host' is redundant, please ensure it has been registered
14:33:30.737 [192.168.3.31-startStop-1] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'studio.server.ui' is redundant, please ensure it has been registered
14:33:30.737 [192.168.3.31-startStop-1] WARN com.baidu.hugegraph.config.HugeConfig ID: TS: - The config option 'studio.server.api.war' is redundant, please ensure it has been registered
14:33:30.738 [192.168.3.31-startStop-1] INFO com.baidu.hugegraph.studio.board.serializer.BoardSerializer ID: TS: - The board file path is: /root/.hugegraph-studio/board
Sep 06, 2018 2:33:30 PM org.springframework.web.context.ContextLoader initWebApplicationContext
INFO: Root WebApplicationContext: initialization completed in 1023 ms
Sep 06, 2018 2:33:31 PM org.apache.coyote.AbstractProtocol start
INFO: Starting ProtocolHandler [http-nio-192.168.3.31-8088]
14:33:31.337 [main] INFO com.baidu.hugegraph.studio.HugeGraphStudio ID: TS: - HugeGraphStudio is now running on: http://192.168.3.31:8088


Running lsof -i:8088 shows that the port is not owned by any service.

hasLabel(label...),多个label时失败

Expected behavior 期望表现

正确查询,满足一个label即可返回

Actual behavior 实际表现

java.lang.ClassCastException: java.lang.String cannot be cast to com.baidu.hugegraph.backend.id.Id
	at com.baidu.hugegraph.backend.tx.GraphTransaction.optimizeQuery(GraphTransaction.java:892) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.backend.tx.GraphTransaction.query(GraphTransaction.java:292) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.backend.tx.GraphTransaction.queryVerticesFromBackend(GraphTransaction.java:452) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.backend.tx.GraphTransaction.queryVertices(GraphTransaction.java:443) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.backend.cache.CachedGraphTransaction.queryVertices(CachedGraphTransaction.java:93) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.HugeGraph.vertices(HugeGraph.java:337) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.traversal.optimize.HugeGraphStep.vertices(HugeGraphStep.java:94) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at com.baidu.hugegraph.traversal.optimize.HugeGraphStep.lambda$new$0(HugeGraphStep.java:66) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
	at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:139) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.process.traversal.step.filter.FilterStep.processNextStart(FilterStep.java:37) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:192) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.fill(IteratorUtils.java:62) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.list(IteratorUtils.java:85) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.asList(IteratorUtils.java:382) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.server.handler.HttpGremlinEndpointHandler.lambda$channelRead$1(HttpGremlinEndpointHandler.java:239) ~[gremlin-server-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.util.function.FunctionUtils.lambda$wrapFunction$0(FunctionUtils.java:36) ~[gremlin-core-3.2.5.jar:3.2.5]
	at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$2(GremlinExecutor.java:320) ~[gremlin-groovy-3.2.5.jar:3.2.5]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
	at com.baidu.hugegraph.auth.HugeGraphAuthProxy$ContextTask.run(HugeGraphAuthProxy.java:290) [hugegraph-api-0.7.4.jar:0.27.0.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

Steps to reproduce the problem

g.V().hasLabel('person', 'software')

Specifications of environment

  • hugegraph version: v0.7.4

HugeClient connection error: Failed to do request

Connecting with curl or HugeStudio works fine, but running the HugeClient example code from IDEA fails immediately.
Connection code:
HugeClient hugeClient = new HugeClient("http://localhost:8080", "hugegraph");
Error message:
Exception in thread "main" com.baidu.hugegraph.rest.ClientException: Failed to do request
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:69)
at com.baidu.hugegraph.rest.RestClient.get(RestClient.java:147)
at com.baidu.hugegraph.api.version.VersionAPI.get(VersionAPI.java:41)
at com.baidu.hugegraph.driver.VersionManager.getApiVersion(VersionManager.java:45)
at com.baidu.hugegraph.driver.HugeClient.checkServerApiVersion(HugeClient.java:101)
at com.baidu.hugegraph.driver.HugeClient.initManagers(HugeClient.java:84)
at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:59)
at com.baidu.hugegraph.driver.HugeClient.<init>(HugeClient.java:48)
at hugegraph.SingleExampleJava.main(SingleExampleJava.java:14)
Caused by: javax.ws.rs.ProcessingException: javax.ws.rs.core.Response$Status$Family.familyOf(I)Ljavax/ws/rs/core/Response$Status$Family;
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:264)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:684)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:681)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:444)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:681)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:411)
at org.glassfish.jersey.client.JerseyInvocation$Builder.get(JerseyInvocation.java:311)
at com.baidu.hugegraph.rest.RestClient.lambda$get$2(RestClient.java:148)
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:67)
... 8 more
Caused by: java.lang.NoSuchMethodError: javax.ws.rs.core.Response$Status$Family.familyOf(I)Ljavax/ws/rs/core/Response$Status$Family;
at org.glassfish.jersey.message.internal.Statuses$StatusImpl.<init>(Statuses.java:63)
at org.glassfish.jersey.message.internal.Statuses$StatusImpl.<init>(Statuses.java:54)
at org.glassfish.jersey.message.internal.Statuses.from(Statuses.java:143)
at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:397)
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:285)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:255)
... 19 more
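
A NoSuchMethodError on Response$Status$Family.familyOf(int) usually means an old JAX-RS 1.x API jar on the classpath is shadowing the 2.x API that Jersey 2 was compiled against. A sketch of a Maven fix, assuming the conflicting 1.x jar is pulled in transitively (run mvn dependency:tree to find the actual offender and exclude it if pinning alone is not enough):

```xml
<!-- Force the JAX-RS 2.x API that Jersey 2 requires -->
<dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>javax.ws.rs-api</artifactId>
    <version>2.0.1</version>
</dependency>
```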

hugegraph-server memory usage does not drop after repeated REST API queries

Expected behavior

Memory usage should drop when the server is idle.

Actual behavior

For example, hugegraph-server occupies about 3 GB right after startup; after many REST API queries it grows to about 10 GB and never comes back down. How can this be resolved?


Lock error thrown when modifying schema and data dynamically

The schema created initially could not satisfy our requirements, so we had to modify the schema before loading data into the database. After modifying the schema, invoking the setProperty method throws the following exception:

com.baidu.hugegraph.exception.ServerException: Lock [il_delete:35] is locked by other operation
	at com.baidu.hugegraph.exception.ServerException.fromResponse(ServerException.java:44) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.client.RestClient.checkStatus(RestClient.java:63) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.rest.RestClient.put(RestClient.java:142) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
	at com.baidu.hugegraph.api.graph.VertexAPI.append(VertexAPI.java:70) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.driver.GraphManager.appendVertexProperty(GraphManager.java:143) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.structure.graph.Vertex.setProperty(Vertex.java:73) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.structure.graph.Vertex.property(Vertex.java:63) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugegraph.structure.graph.Vertex.property(Vertex.java:30) ~[hugegraph-client-1.5.8.jar:1.5.8.0]
	at com.baidu.hugedragon.repository.graphdb.hugegraph.HugeGraphElement.setProperty(HugeGraphElement.java:118) ~[classes/:?]
	... 19 more

Steps to reproduce

  • Create a vertex label
  • Add a property key
  • Add the property key created in the last step to the vertex label
  • Set a value for the newly created property

Environments

  • hugegraph-client 1.5.8
  • hugegraph-core 0.7.4
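
A common mitigation for transient "Lock ... is locked by other operation" errors is to retry the write after a short backoff, since the server releases the schema lock once the concurrent operation finishes. A minimal, generic sketch — the RetryUtil name and the blanket RuntimeException predicate are illustrative only; in practice you would catch the specific com.baidu.hugegraph.exception.ServerException from hugegraph-client:

```java
import java.util.function.Supplier;

public class RetryUtil {
    // Retry an operation with linear backoff when it fails with a
    // (presumed transient) runtime exception, e.g. a server-side lock
    // conflict. Illustrative sketch only -- tune attempts/backoff.
    public static <T> T withRetry(Supplier<T> op, int maxAttempts, long backoffMs) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;  // remember the failure and back off before retrying
                try {
                    Thread.sleep(backoffMs * attempt);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;  // all attempts exhausted
    }
}
```

A call such as vertex.property("age", 29) could then be wrapped in withRetry(...) while schema changes are in flight.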

REST API request returns an exception

Expected behavior

A normal response.

Actual behavior

{"exception":"class java.io.UncheckedIOException","message":"java.io.IOException: Failed to get result within timeout, timeout=60000ms","cause":"java.io.IOException: Failed to get result within timeout, timeout=60000ms"}
Where is this timeout=60000ms configured, and can it be increased? What causes it?
Sometimes re-running the query returns one of the two responses below instead:
{"exception":"class java.io.UncheckedIOException","message":"org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil","cause":"org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.protobuf.ProtobufUtil"},
{"exception":"class java.lang.RuntimeException","message":"java.lang.OutOfMemoryError: GC overhead limit exceeded","cause":"java.lang.OutOfMemoryError: GC overhead limit exceeded"}
Could you explain what causes each of these errors?

Steps to reproduce the problem

Ran a 4-degree k-neighbor query over a fairly large dataset.

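
Regarding the timeout=60000ms question above, two knobs are usually involved; treat both as assumptions to verify against your version. Long Gremlin evaluations are bounded by Gremlin Server's script evaluation timeout, while with an HBase backend the 60000 ms figure may instead come from the HBase client RPC timeout (hbase.rpc.timeout, whose default is 60000 ms):

```yaml
# conf/gremlin-server.yaml -- stock TinkerPop setting, in milliseconds
scriptEvaluationTimeout: 120000
```

The "GC overhead limit exceeded" response suggests the server JVM heap is too small for the 4-degree k-neighbor result set; raising the -Xmx value in the server start script is the usual remedy.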

Initialization fails with ScyllaDB as the backend

Environment: Ubuntu 18.04, ScyllaDB backend, Java 1.8.0_181. After modifying huge.properties according to the guide, initialization fails with an exception in the main thread:
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: Index support is not enabled
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:50)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:68)
at com.baidu.hugegraph.backend.store.cassandra.CassandraSessionPool$Session.execute(CassandraSessionPool.java:202)
at com.baidu.hugegraph.backend.store.cassandra.CassandraTable.createIndex(CassandraTable.java:566)
at com.baidu.hugegraph.backend.store.cassandra.CassandraTables$Vertex.init(CassandraTables.java:269)
at com.baidu.hugegraph.backend.store.cassandra.CassandraTables$Vertex.init(CassandraTables.java:248)
at com.baidu.hugegraph.backend.store.cassandra.CassandraStore.initTables(CassandraStore.java:398)
at com.baidu.hugegraph.backend.store.cassandra.CassandraStore.init(CassandraStore.java:248)
at com.baidu.hugegraph.backend.store.AbstractBackendStoreProvider.init(AbstractBackendStoreProvider.java:86)
at com.baidu.hugegraph.HugeGraph.initBackend(HugeGraph.java:189)
at com.baidu.hugegraph.cmd.InitStore.initBackend(InitStore.java:115)
at com.baidu.hugegraph.cmd.InitStore.initGraph(InitStore.java:103)
at com.baidu.hugegraph.cmd.InitStore.main(InitStore.java:86)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Index support is not enabled
at com.datastax.driver.core.Responses$Error.asException(Responses.java:148)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:179)
at com.datastax.driver.core.RequestHandler.access$2400(RequestHandler.java:49)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:799)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:633)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1075)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:998)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:934)
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:397)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:302)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:748)
What causes this problem?
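
The "Index support is not enabled" error is raised by ScyllaDB itself: init-store creates secondary indexes, which older Scylla releases only ship as an experimental feature. A sketch of the usual fix, assuming a Scylla version where secondary indexes are still gated behind the experimental flag:

```yaml
# /etc/scylla/scylla.yaml
# Secondary indexes were an experimental feature in older Scylla releases
experimental: true
```

Restart Scylla after the change, then re-run initialization.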

Table 'hugegraph+si' is not opened

Caused by: com.baidu.hugegraph.backend.BackendException: Table 'hugegraph+si' is not opened
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStdSessions.cf(RocksDBStdSessions.java:186)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStdSessions.access$000(RocksDBStdSessions.java:58)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStdSessions$StdSession.scan(RocksDBStdSessions.java:514)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBTable.queryById(RocksDBTable.java:147)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBTables$SecondaryIndex.queryByCond(RocksDBTables.java:179)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBTable.query(RocksDBTable.java:133)
at com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore.query(RocksDBStore.java:266)
at com.baidu.hugegraph.backend.tx.AbstractTransaction.query(AbstractTransaction.java:104)
at com.baidu.hugegraph.backend.tx.SchemaIndexTransaction.queryByName(SchemaIndexTransaction.java:96)
at com.baidu.hugegraph.backend.tx.SchemaIndexTransaction.query(SchemaIndexTransaction.java:75)
at com.baidu.hugegraph.backend.tx.SchemaTransaction.getSchema(SchemaTransaction.java:315)
at com.baidu.hugegraph.backend.cache.CachedSchemaTransaction.lambda$getSchema$3(CachedSchemaTransaction.java:192)
at com.baidu.hugegraph.backend.cache.CachedSchemaTransaction.getOrFetch(CachedSchemaTransaction.java:152)
at com.baidu.hugegraph.backend.cache.CachedSchemaTransaction.getSchema(CachedSchemaTransaction.java:191)
at com.baidu.hugegraph.backend.tx.SchemaTransaction.getPropertyKey(SchemaTransaction.java:105)
at com.baidu.hugegraph.HugeGraph.propertyKey(HugeGraph.java:312)
at com.baidu.hugegraph.traversal.optimize.TraversalUtil.convCompare2UserpropRelation(TraversalUtil.java:322)
at com.baidu.hugegraph.traversal.optimize.TraversalUtil.convCompare2Relation(TraversalUtil.java:274)
at com.baidu.hugegraph.traversal.optimize.TraversalUtil.convHas2Condition(TraversalUtil.java:192)
at com.baidu.hugegraph.traversal.optimize.TraversalUtil.fillConditionQuery(TraversalUtil.java:178)
at com.baidu.hugegraph.traversal.optimize.HugeGraphStep.vertices(HugeGraphStep.java:87)
at com.baidu.hugegraph.traversal.optimize.HugeGraphStep.lambda$new$0(HugeGraphStep.java:66)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.GraphStep.processNextStart(GraphStep.java:139)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.branch.RepeatStep.standardAlgorithm(RepeatStep.java:188)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ComputerAwareStep.processNextStart(ComputerAwareStep.java:46)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.ExpandableStepIterator.next(ExpandableStepIterator.java:50)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.MapStep.processNextStart(MapStep.java:36)
at org.apache.tinkerpop.gremlin.process.traversal.step.map.PathStep.processNextStart(PathStep.java:117)
at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:192)
at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.fill(IteratorUtils.java:62)
at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.list(IteratorUtils.java:85)
at org.apache.tinkerpop.gremlin.util.iterator.IteratorUtils.asList(IteratorUtils.java:382)
at org.apache.tinkerpop.gremlin.server.handler.HttpGremlinEndpointHandler.lambda$channelRead$1(HttpGremlinEndpointHandler.java:239)
at org.apache.tinkerpop.gremlin.util.function.FunctionUtils.lambda$wrapFunction$0(FunctionUtils.java:36)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$2(GremlinExecutor.java:320)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at com.baidu.hugegraph.auth.HugeGraphAuthProxy$ContextTask.run(HugeGraphAuthProxy.java:278)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

HugeClient connection failure

This looks similar to issue #4 (also a "Failed to do request" error), but following the suggested fix in a freshly created, clean project did not resolve it.

Expected behavior

The connection should succeed.

Actual behavior

The following exception is thrown:
Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy30.create(Unknown Source)
at myproject.myproject.SingleExample.main(SingleExample.java:26)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.baidu.hugegraph.structure.schema.BuilderProxy.invoke(BuilderProxy.java:56)
... 2 more
Caused by: com.baidu.hugegraph.rest.ClientException: Failed to do request
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:69)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:115)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:79)
at com.baidu.hugegraph.rest.RestClient.post(RestClient.java:74)
at com.baidu.hugegraph.api.schema.PropertyKeyAPI.create(PropertyKeyAPI.java:43)
at com.baidu.hugegraph.driver.SchemaManager.addPropertyKey(SchemaManager.java:75)
at com.baidu.hugegraph.structure.schema.PropertyKey$BuilderImpl.create(PropertyKey.java:116)
at com.baidu.hugegraph.structure.schema.PropertyKey$BuilderImpl.create(PropertyKey.java:99)
... 7 more
Caused by: javax.ws.rs.ProcessingException: javax/xml/bind/annotation/XmlElement
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:261)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:684)
at org.glassfish.jersey.client.JerseyInvocation$1.call(JerseyInvocation.java:681)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:228)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:444)
at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:681)
at org.glassfish.jersey.client.JerseyInvocation$Builder.method(JerseyInvocation.java:437)
at org.glassfish.jersey.client.JerseyInvocation$Builder.post(JerseyInvocation.java:343)
at com.baidu.hugegraph.rest.RestClient.lambda$post$0(RestClient.java:116)
at com.baidu.hugegraph.rest.RestClient.request(RestClient.java:67)
... 14 more
Caused by: java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:139)
at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:126)
at com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:118)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:488)
at java.base/java.lang.Class.newInstance(Class.java:560)
at com.fasterxml.jackson.jaxrs.json.JsonMapperConfigurator._resolveIntrospector(JsonMapperConfigurator.java:111)
at com.fasterxml.jackson.jaxrs.json.JsonMapperConfigurator._resolveIntrospectors(JsonMapperConfigurator.java:84)
at com.fasterxml.jackson.jaxrs.cfg.MapperConfiguratorBase._setAnnotations(MapperConfiguratorBase.java:120)
at com.fasterxml.jackson.jaxrs.json.JsonMapperConfigurator.getDefaultMapper(JsonMapperConfigurator.java:45)
at com.fasterxml.jackson.jaxrs.base.ProviderBase.locateMapper(ProviderBase.java:925)
at com.fasterxml.jackson.jaxrs.base.ProviderBase._endpointForWriting(ProviderBase.java:686)
at com.fasterxml.jackson.jaxrs.base.ProviderBase.writeTo(ProviderBase.java:558)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.invokeWriteTo(WriterInterceptorExecutor.java:265)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$TerminalWriterInterceptor.aroundWriteTo(WriterInterceptorExecutor.java:250)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.spi.ContentEncoder.aroundWriteTo(ContentEncoder.java:138)
at org.glassfish.jersey.message.internal.WriterInterceptorExecutor.proceed(WriterInterceptorExecutor.java:162)
at org.glassfish.jersey.message.internal.MessageBodyFactory.writeTo(MessageBodyFactory.java:1130)
at org.glassfish.jersey.client.ClientRequest.doWriteEntity(ClientRequest.java:517)
at org.glassfish.jersey.client.ClientRequest.writeEntity(ClientRequest.java:499)
at org.glassfish.jersey.client.internal.HttpUrlConnector._apply(HttpUrlConnector.java:393)
at org.glassfish.jersey.client.internal.HttpUrlConnector.apply(HttpUrlConnector.java:285)
at org.glassfish.jersey.client.ClientRuntime.invoke(ClientRuntime.java:252)
... 25 more
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlElement
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:190)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
... 51 more
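
The ClassNotFoundException for javax.xml.bind.annotation.XmlElement indicates the project is running on Java 9 or later (note the java.base/jdk.internal frames): the JAXB classes were deprecated in Java 9 and removed from the JDK in Java 11. A sketch of a Maven fix — alternatively, run the project on JDK 8:

```xml
<!-- JAXB API, no longer bundled with the JDK on Java 11+ -->
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
</dependency>
```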

Steps to reproduce the problem

  1. Create a Maven quickstart project
  2. Add the dependencies
  3. Run mvn install on the pom.xml
  4. Run the SingleExample class


Specifications of environment

  • hugegraph version: 0.7.4 (server running), hugegraph-client: 1.5.8
  • operating system: ubuntu18.04
  • hugegraph backend: RocksDB

Failed to start hugegraph-server in Docker

### Tried starting with the memory backend
The log is as follows:

2018-08-08 02:24:53 1272  [main] [INFO ] com.baidu.hugegraph.dist.HugeGremlinServer [] - 
         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----

2018-08-08 02:24:53 1494  [main] [INFO ] com.baidu.hugegraph.dist.HugeGremlinServer [] - Configuring Gremlin Server from conf/gremlin-server.yaml
2018-08-08 02:24:54 1987  [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store 'memory' for graph 'hugegraph'
2018-08-08 02:24:54 2015  [main] [INFO ] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Graph [hugegraph] was successfully configured via [conf/hugegraph.properties].
2018-08-08 02:24:54 2016  [main] [INFO ] org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor [] - Initialized Gremlin thread pool.  Threads in pool named with pattern gremlin-*
2018-08-08 02:24:56 3612  [main] [INFO ] org.apache.tinkerpop.gremlin.groovy.engine.ScriptEngines [] - Loaded gremlin-groovy ScriptEngine
2018-08-08 02:24:57 4945  [main] [INFO ] org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor [] - Initialized gremlin-groovy ScriptEngine with scripts/empty-sample.groovy
2018-08-08 02:24:57 4946  [main] [INFO ] org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor [] - Initialized GremlinExecutor and preparing GremlinScriptEngines instances.
2018-08-08 02:24:57 5038  [main] [INFO ] org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor [] - Initialized gremlin-groovy GremlinScriptEngine and registered metrics
2018-08-08 02:24:57 5052  [main] [INFO ] org.apache.tinkerpop.gremlin.server.util.MetricManager [] - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
2018-08-08 02:24:57 5149  [main] [INFO ] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Executing start up LifeCycleHook
2018-08-08 02:24:57 5196  [main] [INFO ] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Executed once at startup of Gremlin Server.
2018-08-08 02:24:57 5390  [main] [INFO ] org.apache.tinkerpop.gremlin.server.AbstractChannelizer [] - Configured application/vnd.gremlin-v1.0+gryo-lite with org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0
2018-08-08 02:24:57 5391  [main] [INFO ] org.apache.tinkerpop.gremlin.server.AbstractChannelizer [] - Configured application/vnd.gremlin-v1.0+gryo-stringd with org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0
2018-08-08 02:24:58 5828  [main] [INFO ] org.apache.tinkerpop.gremlin.server.AbstractChannelizer [] - Configured application/vnd.gremlin-v1.0+json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0
2018-08-08 02:24:58 5969  [main] [INFO ] org.apache.tinkerpop.gremlin.server.AbstractChannelizer [] - Configured application/vnd.gremlin-v2.0+json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0
2018-08-08 02:24:58 5976  [main] [INFO ] org.apache.tinkerpop.gremlin.server.AbstractChannelizer [] - Configured application/json with org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0
2018-08-08 02:24:58 6139  [gremlin-server-boss-1] [INFO ] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Gremlin Server configured with worker thread pool of 1, gremlin pool of 1 and boss thread pool of 1.
2018-08-08 02:24:58 6140  [gremlin-server-boss-1] [INFO ] org.apache.tinkerpop.gremlin.server.GremlinServer [] - Channel started at port 8182.
2018-08-08 02:24:58 6158  [main] [INFO ] com.baidu.hugegraph.server.RestServer [] - RestServer starting...
Aug 08, 2018 2:25:01 AM org.glassfish.grizzly.http.server.NetworkListener start
INFO: Started listener bound to [172.17.0.2:8080]
2018-08-08 02:25:01 9102  [main] [INFO ] com.baidu.hugegraph.server.RestServer [] - Graph 'hugegraph' was successfully configured via 'conf/hugegraph.properties'
2018-08-08 02:25:02 9674  [main] [ERROR] com.baidu.hugegraph.server.RestServer [] - The backend store of 'hugegraph' has not been initialized

It only seems to say that the backend store failed to initialize — but doesn't the memory backend work without initialization?

### Tried starting with the RocksDB backend
The log is as follows:

Initing HugeGraph Store...
2018-08-08 02:41:11 1166  [main] [INFO ] com.baidu.hugegraph.cmd.InitStore [] - Init graph with config file: conf/hugegraph.properties
2018-08-08 02:41:11 1333  [main] [INFO ] com.baidu.hugegraph.HugeGraph [] - Opening backend store 'rocksdb' for graph 'hugegraph'
2018-08-08 02:41:11 1410  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: /tmp/schema
2018-08-08 02:41:12 1715  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB '/tmp/schema' with database 'hugegraph', try to init CF later
2018-08-08 02:41:12 1788  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: /tmp/system
2018-08-08 02:41:12 1876  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB '/tmp/system' with database 'hugegraph', try to init CF later
2018-08-08 02:41:12 1888  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Opening RocksDB with data path: /tmp/graph
2018-08-08 02:41:12 1902  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Failed to open RocksDB '/tmp/graph' with database 'hugegraph', try to init CF later
2018-08-08 02:41:12 1966  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Store initialized: schema
2018-08-08 02:41:12 2000  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Store initialized: system
2018-08-08 02:41:12 2036  [main] [INFO ] com.baidu.hugegraph.backend.store.rocksdb.RocksDBStore [] - Store initialized: graph
2018-08-08 02:41:12 2293  [pool-3-thread-1] [INFO ] com.baidu.hugegraph.backend.Transaction [] - Clear cache on event 'store.init'

I put the directories RocksDB writes to under /tmp, so permissions shouldn't be the problem.

And yet, starting it the same way inside a virtual machine works fine, lol.

How is the community detection algorithm in the hugegraph-benchmark performance test report implemented?

Expected behavior

How is the comprehensive graph performance test (CW) implemented? Is it via the REST API?

Actual behavior

{type something here...}

Steps to reproduce the problem

  1. {step 1}
  2. {step 2}
  3. {step 3}

Status of loaded data

Vertex/Edge summary

  • loaded vertices amount: {like 10 million}
  • loaded edges amount: {like 20 million}
  • loaded time: {like 200s}

Vertex/Edge example

{type something here...}

Schema (VertexLabel, EdgeLabel, IndexLabel)

{type something here...}

Specifications of environment

  • hugegraph version: {like v0.7.4}
  • operating system: {like centos 7.4, 32 CPUs, 64G RAM}
  • hugegraph backend: {like cassandra 3.10, cluster with 20 nodes, 3 x 1TB HDD disk each node}

HugeGraph has a performance problem with outward (out-degree) traversals

Description: starting from a known vertex, traverse outward (out) until reaching vertices with no further outgoing edges
Statement: g.V().hasId("xxx").repeat(out("look")).until(outE("look").count().is(0)).path()
Data: 3000+ edges plus 3000+ vertices; the statement above takes over 12s to execute. What causes this, and can it be optimized?
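One thing to try (a sketch; how much it helps depends on the backend): the `until(outE("look").count().is(0))` condition counts every outgoing edge of every traverser at every step, whereas a negated filter can stop after probing for a single edge.

```groovy
// Same traversal, but the stop condition only has to find (or fail to find)
// one outgoing "look" edge instead of counting them all at each step.
g.V().hasId("xxx").repeat(out("look")).until(__.not(outE("look"))).path()
```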

count() query throws an error

With tens of millions of vertices stored in HBase, g.V().count() times out; after raising the timeout setting it still fails with an HBase-related error:
java.io.InterruptedIOException: null
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:214) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597) ~[hbase-client-2.0.0.jar:2.0.0]
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:53) ~[hbase-client-2.0.0.jar:2.0.0]
at com.baidu.hugegraph.backend.store.hbase.HbaseSessions$RowIterator.hasNext(HbaseSessions.java:474) ~[hugegraph-hbase-0.7.4.jar:?]
at com.baidu.hugegraph.backend.serializer.BinaryEntryIterator.fetch(BinaryEntryIterator.java:71) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.store.BackendEntryIterator.hasNext(BackendEntryIterator.java:55) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.iterator.ExtendableIterator.fetch(ExtendableIterator.java:89) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
at com.baidu.hugegraph.iterator.WrappedIterator.hasNext(WrappedIterator.java:41) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
at com.baidu.hugegraph.iterator.MapperIterator.fetch(MapperIterator.java:42) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
at com.baidu.hugegraph.iterator.WrappedIterator.hasNext(WrappedIterator.java:41) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
at com.baidu.hugegraph.iterator.FilterIterator.fetch(FilterIterator.java:42) ~[hugegraph-common-1.4.9.jar:1.4.9.0]
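If the timeout really is on the HBase client side, these are the standard HBase client keys that govern scanner RPCs (example values in milliseconds; whether the HugeGraph hbase backend of this version forwards them to the client is worth verifying):

```properties
# Standard HBase client timeout keys (example values, in milliseconds).
hbase.rpc.timeout=600000
hbase.client.operation.timeout=600000
hbase.client.scanner.timeout.period=600000
```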

HugeGraphServer fails to start

Using the one-click deployment, startup fails with: Failed to start HugeGraphServer, requires at least 512m free memory
Tested on several machines with 180G, 50G, and 4G of free memory respectively; the OS is CentOS 6.7.
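A plausible cause (worth checking against the start script in that release): the free-memory check parses a column of `free` output that the older procps on CentOS 6 does not print, so the parsed value comes out empty even though plenty of memory is free. A portable probe might look like:

```shell
# Portable free-memory probe (a sketch, not the shipped start script).
# Newer kernels expose MemAvailable in /proc/meminfo; CentOS 6's 2.6.32
# kernel does not, so fall back to MemFree, then to 0 if neither is readable.
free_mb=""
if [ -r /proc/meminfo ]; then
    free_mb=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
    [ -z "$free_mb" ] && free_mb=$(awk '/^MemFree:/ {print int($2/1024)}' /proc/meminfo)
fi
[ -z "$free_mb" ] && free_mb=0
echo "free memory: ${free_mb} MB"
```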

Running the demo throws: The name of property key can't be null

Started HugeGraph locally and ran the demo from the link below; creating the schema fails:
https://hugegraph.github.io/hugegraph-doc//quickstart/hugegraph-client.html

schema.propertyKey("name").asText().ifNotExist().create();
Caused by: com.baidu.hugegraph.exception.ServerException: The name of property key can't be null

Going through the project dependencies one by one, I found that removing the fastjson dependency, or forcing fastjson to 1.1.x, resolves the problem.

The HugeGraph versions I am using:
client : 1.5.8
server : 0.7.4

To reproduce the problem:
Add a fastjson 1.2.x dependency to the demo project above: com.alibaba:fastjson:1.2.44
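The workaround above can be expressed as a Maven pin (the fastjson coordinates are real; the exact 1.1.x version below is only an example). Declaring the older version directly in the project forces it ahead of the transitive 1.2.x one under Maven's nearest-wins dependency mediation.

```xml
<!-- Pin fastjson to a 1.1.x release (example version) so it wins dependency
     mediation over the transitive 1.2.x that breaks hugegraph-client. -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.1.46</version>
</dependency>
```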

Creating an index throws com.baidu.hugegraph.exception.LimitExceedException: Too many records(must <=800000) for the query

Scenario: imported about 1.31 million records first, then found that some columns needed to be queried and created an index.
Version: hugegraph-0.7.4 + RocksDB
Stack trace:
2018-08-27 11:37:29 3702792 [pool-5-thread-1] [WARN ] com.baidu.hugegraph.task.HugeTask [] - An exception occurred when running task: 1
com.baidu.hugegraph.exception.LimitExceedException: Too many records(must <=800000) for the query: Query for VERTEX offset=0, limit=9223372036854775807, order by {} where id in [2:"Doe"!"Anna"!""!"Other"!1/1/1963, 2:"Lam"!"Thi"!""!"Green"!1/11/1956, 2:"Bond"!"Tina"!""!"Green"!3/3/1969, 2:"Guy"!"Alex"!""!"Green"!2/27/1985, 2:"Bean"!"Eric"!"A"!"Green"!7/1...
at com.baidu.hugegraph.backend.query.Query.checkCapacity(Query.java:203) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.query.IdQuery.query(IdQuery.java:76) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.query.IdQuery.query(IdQuery.java:82) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.query.IdQuery.<init>(IdQuery.java:61) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphIndexTransaction.query(GraphIndexTransaction.java:360) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphTransaction.optimizeQuery(GraphTransaction.java:961) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphTransaction.query(GraphTransaction.java:292) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphTransaction.queryVerticesFromBackend(GraphTransaction.java:452) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphTransaction.queryVertices(GraphTransaction.java:443) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.cache.CachedGraphTransaction.queryVertices(CachedGraphTransaction.java:93) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphIndexTransaction.rebuildIndex(GraphIndexTransaction.java:1109) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphIndexTransaction.rebuildIndex(GraphIndexTransaction.java:1047) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.backend.tx.GraphTransaction.rebuildIndex(GraphTransaction.java:1237) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.job.schema.RebuildIndexCallable.runTask(RebuildIndexCallable.java:36) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at com.baidu.hugegraph.job.schema.SchemaCallable.call(SchemaCallable.java:18) ~[hugegraph-core-0.7.4.jar:0.7.4.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at com.baidu.hugegraph.task.HugeTask.run(HugeTask.java:198) [hugegraph-core-0.7.4.jar:0.7.4.0]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_181]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_181]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_181]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
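For context, the 800000 cap looks like a built-in query capacity guard: rebuilding the index loads every matching vertex in a single query, so ~1.31 million imported records trips it. A minimal sketch of such a guard (illustrative only, not the HugeGraph source; class and method names are assumptions):

```java
// Illustrative capacity guard (hypothetical names, not the HugeGraph source):
// a query that would touch more records than the cap is rejected up front.
public class CapacityCheck {
    static final long DEFAULT_CAPACITY = 800_000L;

    static void checkCapacity(long records) {
        if (records > DEFAULT_CAPACITY) {
            throw new IllegalStateException(
                    "Too many records(must <=" + DEFAULT_CAPACITY + ") for the query");
        }
    }

    public static void main(String[] args) {
        checkCapacity(800_000L);            // exactly at the cap: allowed
        try {
            checkCapacity(1_310_000L);      // ~1.31M imported vertices: rejected
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```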

Speed up the TinkerPop test

Several things slow the TinkerPop test down:

  1. RocksDB clear-schema deletes index labels too many times, because it has to scan by prefix
  2. the schema is cleared even when the database is empty
  3. variables are cleared even when empty, and the variables schema is initialized at the same time if it does not exist
  4. the basic schema is initialized and cleared when loading a non-basic graph
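Points 2 and 3 boil down to guarding the clear step behind an emptiness check; a toy sketch of that idea (field and method names are assumptions, not the HugeGraph API):

```java
// Sketch of points 2-3 above (names are assumptions): make clear steps
// no-ops when there is nothing to clear, instead of always hitting the backend.
public class GuardedClear {
    static int backendCalls = 0;      // counts expensive backend operations
    static boolean schemaEmpty = true;

    static void clearSchema() {
        if (schemaEmpty) {
            return;                   // empty database: skip the prefix scan entirely
        }
        backendCalls++;               // stand-in for the real delete-by-prefix work
        schemaEmpty = true;
    }

    public static void main(String[] args) {
        clearSchema();                // no-op on an empty store
        schemaEmpty = false;
        clearSchema();                // one real clear
        System.out.println("backend calls: " + backendCalls);
    }
}
```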
