vesoft-inc / nebula

A distributed, fast open-source graph database featuring horizontal scalability and high availability

Home Page: https://nebula-graph.io

License: Apache License 2.0

Languages: CMake 1.41%, C++ 66.13%, Thrift 0.27%, Yacc 0.93%, Lex 0.24%, Shell 0.18%, C 0.02%, Python 2.76%, Dockerfile 0.01%, Makefile 0.04%, Gherkin 28.01%
Topics: graph-database, distributed, database, graphdb, raft, cpp, nebula-graph, nebula, graph, nebulagraph

nebula's Introduction


English | 中文 (Chinese)
A distributed, scalable, lightning-fast graph database


NebulaGraph

Introduction

NebulaGraph is a popular open-source graph database that can handle large volumes of data with millisecond latency, scale quickly, and perform fast graph analytics. NebulaGraph has been widely used for social media, recommendation systems, knowledge graphs, security, capital flows, AI, etc. See our users.

The following lists some of NebulaGraph's features:

  • Symmetrically distributed
  • Storage and computing separation
  • Horizontal scalability
  • Strong data consistency via the Raft protocol
  • OpenCypher-compatible query language
  • Role-based access control for higher-level security
  • Different types of graph analytics algorithms

The following figure shows the architecture of the NebulaGraph core.

[Figure: NebulaGraph Architecture]

Learn more on the NebulaGraph website.

Quick start

Read the getting started docs for a quick start.

Using NebulaGraph

NebulaGraph is a distributed graph database with multiple components. You can download or try it in the following ways:

Getting help

If you encounter any problems while playing around with NebulaGraph, please reach out for help:

DevTools

NebulaGraph comes with a set of tools to help you manage and monitor your graph database. See Ecosystem.

Contributing

Contributions are warmly welcomed and greatly appreciated. Here are a few ways you can contribute:

Landscape


NebulaGraph enriches the CNCF Database Landscape.

Licensing

NebulaGraph is under the Apache 2.0 license, so you can freely download, modify, and deploy the source code to meet your needs.
You can also freely deploy NebulaGraph as a back-end service to support your SaaS deployment.

Contact

Community

Join the NebulaGraph community:

  • Asking questions: Stack Overflow, Discussions
  • Chat with community members: Chat History, Slack
  • NebulaGraph Meetup: Google Calendar, Zoom Meetup, Meeting Archive
  • Chat, ask, or meet in Chinese: WeChat Group, Tencent_Meeting, Discourse

If you find NebulaGraph interesting, please ⭐️ Star it at the top of the GitHub page.

nebula's People

Contributors

aiee, ayyt, boshengchen, bright-starry-sky, cangfengzhs, codesigner, cpwstatic, critical27, czpmango, dangleptr, darionyaphet, dutor, harrischu, jackwener, jievince, laura-ding, liuyu85cn, liwenhui-soul, monadbobo, nevermore3, panda-sheep, pengweisong, sherman-the-tank, shinji-ikarig, shylock-hg, sophie-xie, superyoko, xtcyclist, yixinglu, zlcook

nebula's Issues

[common] Add http interface for admin

An HTTP server is already supported in third-party/proxygen; we could use it to provide some admin interfaces.
For example, we need to set gflags and show some running state.
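
For illustration, a minimal sketch of what such an admin endpoint could look like on top of proxygen's HTTPServer. The handler name, route handling, port, and gflags wiring here are hypothetical, not an interface defined by this issue:

#include <memory>
#include <string>
#include <vector>

#include <folly/SocketAddress.h>
#include <folly/init/Init.h>
#include <proxygen/httpserver/HTTPServer.h>
#include <proxygen/httpserver/RequestHandler.h>
#include <proxygen/httpserver/RequestHandlerFactory.h>
#include <proxygen/httpserver/ResponseBuilder.h>

using namespace proxygen;

// Hypothetical admin handler: it only echoes the requested path; a real one
// could dispatch on the path to set gflags or report running state.
class AdminHandler : public RequestHandler {
public:
    void onRequest(std::unique_ptr<HTTPMessage> headers) noexcept override {
        path_ = headers->getPath();
    }
    void onBody(std::unique_ptr<folly::IOBuf>) noexcept override {}
    void onEOM() noexcept override {
        ResponseBuilder(downstream_)
            .status(200, "OK")
            .body("admin endpoint hit: " + path_ + "\n")
            .sendWithEOM();
    }
    void onUpgrade(UpgradeProtocol) noexcept override {}
    void requestComplete() noexcept override { delete this; }
    void onError(ProxygenError) noexcept override { delete this; }

private:
    std::string path_;
};

class AdminHandlerFactory : public RequestHandlerFactory {
public:
    void onServerStart(folly::EventBase*) noexcept override {}
    void onServerStop() noexcept override {}
    RequestHandler* onRequest(RequestHandler*, HTTPMessage*) noexcept override {
        return new AdminHandler();
    }
};

int main(int argc, char* argv[]) {
    folly::init(&argc, &argv);

    HTTPServerOptions options;
    options.threads = 2;
    options.handlerFactories =
        RequestHandlerChain().addThen<AdminHandlerFactory>().build();

    // Port 11000 is an arbitrary example value.
    std::vector<HTTPServer::IPConfig> addrs = {
        {folly::SocketAddress("0.0.0.0", 11000, true), HTTPServer::Protocol::HTTP},
    };

    HTTPServer server(std::move(options));
    server.bind(addrs);
    server.start();   // blocks until server.stop() is called
    return 0;
}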

heap-use-after-free failure on log_cas_test

This bug does not appear every time.

It seems some resources are accessed after the service has already stopped. (A minimal sketch of the suspected race follows the trace below.)

17: ==4393==ERROR: AddressSanitizer: heap-use-after-free on address 0x60400011de10 at pc 0x0000006dc8b9 bp 0x7f914d6743c0 sp 0x7f914d6743b0
17: Could not create logging file: No such file or directory
17: COULD NOT CREATE A LOGGINGFILE 20190114-163350.4393!I0114 16:33:50.038925  4393 RaftexService.cpp:89] Stopping the raftex service on port 35935
17: I0114 16:33:50.039160  4393 RaftexService.cpp:96] All partitions have stopped
17: READ of size 8 at 0x60400011de10 thread T1101
17: I0114 16:33:50.039450  5677 RaftexService.cpp:61] The Raftex Service stopped
17:     #0 0x6dc8b8 in std::__detail::_Node_iterator_base<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true>::_M_incr() /usr/include/c++/8/bits/hashtable_policy.h:300
17:     #1 0x6dc8b8 in std::__detail::_Node_iterator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, false, true>::operator++() /usr/include/c++/8/bits/hashtable_policy.h:355
17:     #2 0x6dc8b8 in foreach<folly::gen::detail::Map<Predicate>::Generator<Value, Source, Result>::foreach(Body&&) const [with Body = folly::gen::detail::CollectTemplate<Collection, Allocator>::compose(const folly::gen::GenImpl<Value, Source>&) const [with Value = folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&; Source = folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&>; StorageType = folly::Future<nebula::raftex::cpp2::AppendLogResponse>; Collection = std::vector<folly::Future<nebula::raftex::cpp2::AppendLogResponse>, std::allocator<folly::Future<nebula::raftex::cpp2::AppendLogResponse> > >; Container = std::vector; Allocator = std::allocator]::<lambda(folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&)>; Value = std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&; Source = folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>; Result = folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&; Predicate = nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)>]::<lambda(std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&)> > /home/dutor/Wdir/nebula/third-party/folly/_install/include/folly/gen/Base-inl.h:130
17:     #3 0x6dc8b8 in foreach<folly::gen::detail::CollectTemplate<Collection, Allocator>::compose(const folly::gen::GenImpl<Value, Source>&) const [with Value = folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&; Source = folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&>; StorageType = folly::Future<nebula::raftex::cpp2::AppendLogResponse>; Collection = std::vector<folly::Future<nebula::raftex::cpp2::AppendLogResponse>, std::allocator<folly::Future<nebula::raftex::cpp2::AppendLogResponse> > >; Container = std::vector; Allocator = std::allocator]::<lambda(folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&)> > /home/dutor/Wdir/nebula/third-party/folly/_install/include/folly/gen/Base-inl.h:500
17:     #4 0x6dc8b8 in operator|<folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&, folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&>, folly::gen::detail::CollectTemplate<Collection, Allocator>::compose(const folly::gen::GenImpl<Value, Source>&) const [with Value = folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&; Source = folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&>; StorageType = folly::Future<nebula::raftex::cpp2::AppendLogResponse>; Collection = std::vector<folly::Future<nebula::raftex::cpp2::AppendLogResponse>, std::allocator<folly::Future<nebula::raftex::cpp2::AppendLogResponse> > >; Container = std::vector; Allocator = std::allocator]::<lambda(folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&)> > /home/dutor/Wdir/nebula/third-party/folly/_install/include/folly/gen/Core-inl.h:264
17:     #5 0x6dc8b8 in compose<folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&, folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&> > /home/dutor/Wdir/nebula/third-party/folly/_install/include/folly/gen/Base-inl.h:2204
17:     #6 0x6dc8b8 in operator|<folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&, folly::gen::detail::Map<nebula::raftex::RaftPart::sendHeartbeat()::<lambda(PeerHostEntry&)> >::Generator<std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&, folly::gen::detail::ReferencedSource<std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >, std::pair<const std::pair<int, int>, std::shared_ptr<nebula::raftex::Host> >&>, folly::Future<nebula::raftex::cpp2::AppendLogResponse>&&>, folly::gen::detail::CollectTemplate<std::vector> > /home/dutor/Wdir/nebula/third-party/folly/_install/include/folly/gen/Core-inl.h:292
17:     #7 0x6dc8b8 in nebula::raftex::RaftPart::sendHeartbeat() /home/dutor/Wdir/nebula/src/raftex/RaftPart.cpp:1220
17:     #8 0x6e76f8 in nebula::raftex::RaftPart::statusPolling() /home/dutor/Wdir/nebula/src/raftex/RaftPart.cpp:831
17:     #9 0x6e7c4e in operator() /home/dutor/Wdir/nebula/src/raftex/RaftPart.cpp:840
17:     #10 0x6e7c4e in __invoke_impl<void, nebula::raftex::RaftPart::statusPolling()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60
17:     #11 0x6e7c4e in __invoke<nebula::raftex::RaftPart::statusPolling()::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95
17:     #12 0x6e7c4e in __call<void> /usr/include/c++/8/functional:400
17:     #13 0x6e7c4e in operator()<> /usr/include/c++/8/functional:484
17:     #14 0x6e7c4e in _M_invoke /usr/include/c++/8/bits/std_function.h:297
17:     #15 0x6c6d31 in std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687
17:     #16 0x6c6d31 in operator() /home/dutor/Wdir/nebula/src/common/thread/GenericWorker.h:236
17:     #17 0x6c6d31 in __invoke_impl<void, nebula::thread::GenericWorker::addDelayTask(size_t, F&&, Args&& ...) [with F = nebula::raftex::RaftPart::statusPolling()::<lambda()>; Args = {}]::<lambda()>&> /usr/include/c++/8/bits/invoke.h:60
17:     #18 0x6c6d31 in __invoke<nebula::thread::GenericWorker::addDelayTask(size_t, F&&, Args&& ...) [with F = nebula::raftex::RaftPart::statusPolling()::<lambda()>; Args = {}]::<lambda()>&> /usr/include/c++/8/bits/invoke.h:95
17:     #19 0x6c6d31 in __call<void> /usr/include/c++/8/functional:400
17:     #20 0x6c6d31 in operator()<> /usr/include/c++/8/functional:484
17:     #21 0x6c6d31 in _M_invoke /usr/include/c++/8/bits/std_function.h:297
17:     #22 0x8bbd97 in std::function<void ()>::operator()() const /usr/include/c++/8/bits/std_function.h:687
17:     #23 0x8bbd97 in operator() /home/dutor/Wdir/nebula/src/common/thread/GenericWorker.cpp:125
17:     #24 0x8bbd97 in _FUN /home/dutor/Wdir/nebula/src/common/thread/GenericWorker.cpp:129
17:     #25 0xb715a0 in event_process_active_single_queue (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xb715a0)
17:     #26 0xb71cf6 in event_base_loop (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xb71cf6)
17:     #27 0x8be25f in void std::__invoke_impl<void, void (nebula::thread::GenericWorker::*&)(), nebula::thread::GenericWorker*&>(std::__invoke_memfun_deref, void (nebula::thread::GenericWorker::*&)(), nebula::thread::GenericWorker*&) /usr/include/c++/8/bits/invoke.h:73
17:     #28 0x8be25f in std::__invoke_result<void (nebula::thread::GenericWorker::*&)(), nebula::thread::GenericWorker*&>::type std::__invoke<void (nebula::thread::GenericWorker::*&)(), nebula::thread::GenericWorker*&>(void (nebula::thread::GenericWorker::*&)(), nebula::thread::GenericWorker*&) /usr/include/c++/8/bits/invoke.h:95
17:     #29 0x8be25f in void std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()>::__call<void, , 0ul>(std::tuple<>&&, std::_Index_tuple<0ul>) /usr/include/c++/8/functional:400
17:     #30 0x8be25f in void std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()>::operator()<, void>() /usr/include/c++/8/functional:484
17:     #31 0x8be25f in std::_Function_handler<void (), std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> >::_M_invoke(std::_Any_data const&) /usr/include/c++/8/bits/std_function.h:297
17:     #32 0x8bddf3 in void std::__invoke_impl<void, void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> >(std::__invoke_other, void (*&&)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()>&&) /usr/include/c++/8/bits/invoke.h:60
17:     #33 0x8bddf3 in std::__invoke_result<void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> >::type std::__invoke<void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> >(void (*&&)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*&&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&&, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()>&&) /usr/include/c++/8/bits/invoke.h:95
17:     #34 0x8bddf3 in decltype (__invoke((_S_declval<0ul>)(), (_S_declval<1ul>)(), (_S_declval<2ul>)(), (_S_declval<3ul>)())) std::thread::_Invoker<std::tuple<void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> > >::_M_invoke<0ul, 1ul, 2ul, 3ul>(std::_Index_tuple<0ul, 1ul, 2ul, 3ul>) /usr/include/c++/8/thread:244
17:     #35 0x8bddf3 in std::thread::_Invoker<std::tuple<void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> > >::operator()() /usr/include/c++/8/thread:253
17:     #36 0x8bddf3 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()> const&), nebula::thread::NamedThread*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::_Bind<void (nebula::thread::GenericWorker::*(nebula::thread::GenericWorker*))()> > > >::_M_run() /usr/include/c++/8/thread:196
17:     #37 0xbff282 in execute_native_thread_routine (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xbff282)
17:     #38 0x7f918448b58d in start_thread /usr/src/debug/glibc-2.28/nptl/pthread_create.c:486
17:     #39 0x7f91843ba512 in clone (/lib64/libc.so.6+0xfd512)
17:
17: 0x60400011de10 is located 0 bytes inside of 40-byte region [0x60400011de10,0x60400011de38)
17: freed by thread T0 here:
17:     #0 0x7f9184c4d348 in operator delete(void*) (/lib64/libasan.so.5+0xf2348)
17:     #1 0x6fdb13 in std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true> > >::_M_deallocate_nodes(std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true>*) /usr/include/c++/8/bits/hashtable_policy.h:2113
17:     #2 0x6fdb13 in std::_Hashtable<std::pair<int, int>, std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, std::allocator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> > >, std::__detail::_Select1st, std::equal_to<std::pair<int, int> >, std::hash<std::pair<int, int> >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::clear() /usr/include/c++/8/bits/hashtable.h:2047
17:     #3 0x6c0137 in std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host>, std::hash<std::pair<int, int> >, std::equal_to<std::pair<int, int> >, std::allocator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> > > >::clear() /usr/include/c++/8/bits/unordered_map.h:843
17:     #4 0x6c0137 in nebula::raftex::RaftPart::stop() /home/dutor/Wdir/nebula/src/raftex/RaftPart.cpp:279
17:     #5 0x72cadd in nebula::raftex::RaftexService::stop() /home/dutor/Wdir/nebula/src/raftex/RaftexService.cpp:93
17:     #6 0x6aaf1b in nebula::raftex::finishRaft(std::vector<std::shared_ptr<nebula::raftex::RaftexService>, std::allocator<std::shared_ptr<nebula::raftex::RaftexService> > >&, std::vector<std::shared_ptr<nebula::raftex::test::TestShard>, std::allocator<std::shared_ptr<nebula::raftex::test::TestShard> > >&, std::shared_ptr<nebula::thread::GenericThreadPool>&, std::shared_ptr<nebula::raftex::test::TestShard>&) /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.cpp:183
17:     #7 0x696154 in nebula::raftex::RaftexTestFixture::TearDown() /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.h:95
17:     #8 0xb49d99 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xb49d99)
17:
17: previously allocated by thread T0 here:
17:     #0 0x7f9184c4c470 in operator new(unsigned long) (/lib64/libasan.so.5+0xf1470)
17:     #1 0x7036e5 in __gnu_cxx::new_allocator<std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true> >::allocate(unsigned long, void const*) /usr/include/c++/8/ext/new_allocator.h:111
17:     #2 0x7036e5 in std::allocator_traits<std::allocator<std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true> > >::allocate(std::allocator<std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true> >&, unsigned long) /usr/include/c++/8/bits/alloc_traits.h:436
17:     #3 0x7036e5 in std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true>* std::__detail::_Hashtable_alloc<std::allocator<std::__detail::_Hash_node<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true> > >::_M_allocate_node<std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host> >(std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host>&&) /usr/include/c++/8/bits/hashtable_policy.h:2077
17:     #4 0x7036e5 in std::pair<std::__detail::_Node_iterator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, false, true>, bool> std::_Hashtable<std::pair<int, int>, std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, std::allocator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> > >, std::__detail::_Select1st, std::equal_to<std::pair<int, int> >, std::hash<std::pair<int, int> >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::_M_emplace<std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host> >(std::integral_constant<bool, true>, std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host>&&) /usr/include/c++/8/bits/hashtable.h:1657
17:     #5 0x6c8d16 in std::pair<std::__detail::_Node_iterator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, false, true>, bool> std::_Hashtable<std::pair<int, int>, std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, std::allocator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> > >, std::__detail::_Select1st, std::equal_to<std::pair<int, int> >, std::hash<std::pair<int, int> >, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true> >::emplace<std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host> >(std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host>&&) /usr/include/c++/8/bits/hashtable.h:748
17:     #6 0x6c8d16 in std::pair<std::__detail::_Node_iterator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, false, true>, bool> std::unordered_map<std::pair<int, int>, std::shared_ptr<nebula::raftex::Host>, std::hash<std::pair<int, int> >, std::equal_to<std::pair<int, int> >, std::allocator<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> > > >::emplace<std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host> >(std::pair<int, int>&, std::shared_ptr<nebula::raftex::Host>&&) /usr/include/c++/8/bits/unordered_map.h:388
17:     #7 0x6c8d16 in nebula::raftex::RaftPart::start(std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > >&&) /home/dutor/Wdir/nebula/src/raftex/RaftPart.cpp:245
17:     #8 0x6af23e in nebula::raftex::setupRaft(nebula::fs::TempDir&, std::shared_ptr<nebula::thread::GenericThreadPool>&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > >&, std::vector<std::shared_ptr<nebula::raftex::RaftexService>, std::allocator<std::shared_ptr<nebula::raftex::RaftexService> > >&, std::vector<std::shared_ptr<nebula::raftex::test::TestShard>, std::allocator<std::shared_ptr<nebula::raftex::test::TestShard> > >&, std::shared_ptr<nebula::raftex::test::TestShard>&) /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.cpp:166
17:     #9 0x696422 in nebula::raftex::RaftexTestFixture::SetUp() /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.h:88
17:     #10 0xb49d99 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xb49d99)
17:
17: Thread T1101 created by T0 here:
17:     #0 0x7f9184ba7043 in __interceptor_pthread_create (/lib64/libasan.so.5+0x4c043)
17:     #1 0xbff358 in std::thread::_M_start_thread(std::unique_ptr<std::thread::_State, std::default_delete<std::thread::_State> >, void (*)()) (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xbff358)
17:     #2 0x8bf6dc in nebula::thread::GenericThreadPool::start(unsigned long, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /home/dutor/Wdir/nebula/src/common/thread/GenericThreadPool.cpp:29
17:     #3 0x6ad21c in nebula::raftex::setupRaft(nebula::fs::TempDir&, std::shared_ptr<nebula::thread::GenericThreadPool>&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > >&, std::vector<std::pair<int, int>, std::allocator<std::pair<int, int> > >&, std::vector<std::shared_ptr<nebula::raftex::RaftexService>, std::allocator<std::shared_ptr<nebula::raftex::RaftexService> > >&, std::vector<std::shared_ptr<nebula::raftex::test::TestShard>, std::allocator<std::shared_ptr<nebula::raftex::test::TestShard> > >&, std::shared_ptr<nebula::raftex::test::TestShard>&) /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.cpp:123
17:     #4 0x696422 in nebula::raftex::RaftexTestFixture::SetUp() /home/dutor/Wdir/nebula/src/raftex/test/RaftexTestBase.h:88
17:     #5 0xb49d99 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) (/home/dutor/Wdir/nebula/src/raftex/test/_build/log_cas_test+0xb49d99)
17:
17: SUMMARY: AddressSanitizer: heap-use-after-free /usr/include/c++/8/bits/hashtable_policy.h:300 in std::__detail::_Node_iterator_base<std::pair<std::pair<int, int> const, std::shared_ptr<nebula::raftex::Host> >, true>::_M_incr()
17: Shadow bytes around the buggy address:
17:   0x0c088001bb70: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fa
17:   0x0c088001bb80: fa fa 00 00 00 00 00 fa fa fa fd fd fd fd fd fa
17:   0x0c088001bb90: fa fa fd fd fd fd fd fa fa fa 00 00 00 00 00 fa
17:   0x0c088001bba0: fa fa fd fd fd fd fd fa fa fa 00 00 00 00 02 fa
17:   0x0c088001bbb0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fa
17: =>0x0c088001bbc0: fa fa[fd]fd fd fd fd fa fa fa fd fd fd fd fd fd
17:   0x0c088001bbd0: fa fa fd fd fd fd fd fa fa fa 00 00 00 00 02 fa
17:   0x0c088001bbe0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fa
17:   0x0c088001bbf0: fa fa fd fd fd fd fd fa fa fa fd fd fd fd fd fd
17:   0x0c088001bc00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
17:   0x0c088001bc10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
17: Shadow byte legend (one shadow byte represents 8 application bytes):
17:   Addressable:           00
17:   Partially addressable: 01 02 03 04 05 06 07
17:   Heap left redzone:       fa
17:   Freed heap region:       fd
17:   Stack left redzone:      f1
17:   Stack mid redzone:       f2
17:   Stack right redzone:     f3
17:   Stack after return:      f5
17:   Stack use after scope:   f8
17:   Global redzone:          f9
17:   Global init order:       f6
17:   Poisoned by user:        f7
17:   Container overflow:      fc
17:   Array cookie:            ac
17:   Intra object redzone:    bb
17:   ASan internal:           fe
17:   Left alloca redzone:     ca
17:   Right alloca redzone:    cb
17: ==4393==ABORTING
1/1 Test #17: log_cas_test .....................***Failed   27.97 sec
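
A minimal, self-contained sketch of the pattern the trace points at (the names Part, Host, and hosts_ are placeholders, not the actual RaftPart code): a background heartbeat thread walks the peer map while stop() clears it. Snapshotting the shared_ptrs under the lock before iterating keeps each Host alive and avoids touching freed map nodes:

#include <chrono>
#include <map>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

struct Host { int port; };

class Part {
public:
    void addHost(int id, int port) {
        std::lock_guard<std::mutex> g(lock_);
        hosts_[id] = std::make_shared<Host>(Host{port});
    }

    // Snapshot the hosts under the lock, then iterate the snapshot; the
    // shared_ptrs keep each Host alive even if stop() clears the map meanwhile.
    void sendHeartbeat() {
        std::vector<std::shared_ptr<Host>> snapshot;
        {
            std::lock_guard<std::mutex> g(lock_);
            for (auto& kv : hosts_) {
                snapshot.push_back(kv.second);
            }
        }
        for (auto& h : snapshot) {
            (void)h->port;   // stand-in for sending the heartbeat RPC
        }
    }

    void stop() {
        std::lock_guard<std::mutex> g(lock_);
        hosts_.clear();   // frees the map nodes an unlocked iteration would touch
    }

private:
    std::mutex lock_;
    std::map<int, std::shared_ptr<Host>> hosts_;
};

int main() {
    Part part;
    for (int i = 0; i < 8; ++i) {
        part.addHost(i, 9000 + i);
    }

    std::thread heartbeat([&part] {
        for (int i = 0; i < 1000; ++i) {
            part.sendHeartbeat();
        }
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
    part.stop();     // safe: the heartbeat thread never walks hosts_ unlocked
    heartbeat.join();
    return 0;
}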

[Wal] rollbackToLog() may cause the asynchronous flushBuffer() to run failed

Describe the bug (must be provided)
rollbackToLog() and flushBuffer(), two methods of FileBasedWal.cpp, run in different threads. The buffer being persisted by flushBuffer() may already have been deleted by rollbackToLog(), so the post-condition check at the end of flushBuffer(), which requires the persisted buffer to be the front of the buffer list, can fail. (A toy sketch of one possible mitigation follows the failure trace below.)

How To Reproduce (must be provided)
Steps to reproduce the behavior:

  1. Step 1:
  • Insert sleep(1); before std::lock_guard<std::mutex> g(buffersMutex_); in the flushBuffer method:
void FileBasedWal::flushBuffer(BufferPtr buffer) {
    ...
    // Flush the wal file
    if (currFd_ >= 0) {
        CHECK_EQ(fsync(currFd_), 0);
    }
    sleep(1);
    // Remove the buffer from the list
    {
        std::lock_guard<std::mutex> g(buffersMutex_);
        CHECK_EQ(buffer.get(), buffers_.front().get());    // this check may fail
        buffers_.pop_front();
    }
    ...
}
  2. Step 2:
  • Add the RollbackToFile test case to FileBasedWalTest.cpp:
TEST(FileBasedWal, RollbackToFile) {
    // Force to make each file 1MB, each buffer is 1MB, and there are two
    // buffers at most
    FileBasedWalPolicy policy;
    policy.fileSize = 1;
    policy.bufferSize = 1;
    policy.numBuffers = 2;

    TempDir walDir("/tmp/testWal.XXXXXX");

    // Create a new WAL, add one log and close it
    auto wal = FileBasedWal::getWal(walDir.path(), policy, flusher.get());
    EXPECT_EQ(0, wal->lastLogId());

    // Append > 1MB logs in total
    for (int i = 1; i <= 1000; i++) {
       ASSERT_TRUE(wal->appendLog(i /*id*/, 1 /*term*/, 0 /*cluster*/,
           folly::stringPrintf(kLongMsg, i)));
    }
    ASSERT_EQ(1000, wal->lastLogId());

    // Wait three seconds to make sure all buffers have been flushed
    sleep(3);

    // Close the wal
    wal.reset();

    // Check the number of files
    auto files = FileUtils::listAllFilesInDir(walDir.path());
    ASSERT_EQ(2, files.size());

    // Now let's open it to read
    wal = FileBasedWal::getWal(walDir.path(), policy, flusher.get());
    EXPECT_EQ(1000, wal->lastLogId());

    // Appending >1MB of logs makes sure the first buffer will be sent to
    // the flusher_ thread
    for (int i = 1001; i <= 2000; i++) {
        ASSERT_TRUE(
            wal->appendLog(i /*id*/, 1 /*term*/, 0 /*cluster*/,
                folly::stringPrintf(kLongMsg, i + 1000)));
    }
    ASSERT_EQ(2000, wal->lastLogId());

    // Rolling back 1100 logs will remove all buffers in memory
    wal->rollbackToLog(900);
    ASSERT_EQ(900, wal->lastLogId());

    sleep(2);
}
  3. Step 3:
    Run ./file_based_wal_test --gtest_filter="*RollbackToFile" and you will get the following error.

Expected behavior

[ RUN      ] FileBasedWal.RollbackToFile
I0125 04:36:35.768205 29126 BufferFlusher.cpp:54] Buffer flusher loop started
W0125 04:36:40.071640 29124 FileBasedWal.cpp:568] Need to rollback from files. This is an expensive operation. Please make sure it is correct and necessary
F0125 04:36:41.114615 29126 FileBasedWal.cpp:401] Check failed: buffer.get() == buffers_.front().get() (0x16760a0 vs. 0x1676120)
*** Check failure stack trace: ***
    @           0x64e3cd  google::LogMessage::Fail()
    @           0x6529dc  google::LogMessage::SendToLog()
    @           0x64e0cd  google::LogMessage::Flush()
    @           0x64e8b9  google::LogMessageFatal::~LogMessageFatal()
    @           0x58477d  nebula::wal::FileBasedWal::flushBuffer()
    @           0x577d27  nebula::wal::BufferFlusher::flushLoop()
    @           0x57b0f6  _ZSt13__invoke_implIvRMN6nebula3wal13BufferFlusherEFvvERPS2_JEET_St21__invoke_memfun_derefOT0_OT1_DpOT2_
    @           0x57b06d  _ZSt8__invokeIRMN6nebula3wal13BufferFlusherEFvvEJRPS2_EENSt15__invoke_resultIT_JDpT0_EE4typeEOS9_DpOSA_
    @           0x57afda  _ZNSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS2_EE6__callIvJEJLm0EEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE
    @           0x57af90  _ZNSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS2_EEclIJEvEET0_DpOT_
    @           0x57ae7e  _ZSt13__invoke_implIvRSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS3_EEJEET_St14__invoke_otherOT0_DpOT1_
    @           0x57ad90  _ZSt8__invokeIRSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS3_EEJEENSt15__invoke_resultIT_JDpT0_EE4typeEOSB_DpOSC_
    @           0x57ac1c  _ZNSt5_BindIFS_IFMN6nebula3wal13BufferFlusherEFvvEPS2_EEvEE6__callIvJEJEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE
    @           0x57a92a  _ZNSt5_BindIFS_IFMN6nebula3wal13BufferFlusherEFvvEPS2_EEvEEclIJEvEET0_DpOT_
    @           0x57a3c7  std::_Function_handler<>::_M_invoke()
    @           0x5783f6  std::function<>::operator()()
    @           0x578223  nebula::thread::NamedThread::hook()
    @           0x57958f  _ZSt13__invoke_implIvPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEEJS5_St5_BindIFSF_IFMN6nebula3wal13BufferFlusherEFvvEPSI_EEvEEEET_St14__invoke_otherOT0_DpOT1_
    @           0x578a85  _ZSt8__invokeIPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEEJS5_St5_BindIFSF_IFMN6nebula3wal13BufferFlusherEFvvEPSI_EEvEEEENSt15__invoke_resultIT_JDpT0_EE4typeEOSR_DpOSS_
    @           0x57b1f7  _ZNSt6thread8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES7_St5_BindIFSH_IFMN6nebula3wal13BufferFlusherEFvvEPSK_EEvEEEEE9_M_invokeIJLm0ELm1ELm2EEEEDTcl8__invokespcl10_S_declvalIXT_EEEEESt12_Index_tupleIJXspT_EEE
    @           0x57b196  _ZNSt6thread8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES7_St5_BindIFSH_IFMN6nebula3wal13BufferFlusherEFvvEPSK_EEvEEEEEclEv
    @           0x57b17a  _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES8_St5_BindIFSI_IFMN6nebula3wal13BufferFlusherEFvvEPSL_EEvEEEEEEE6_M_runEv
    @           0x70f0f3  execute_native_thread_routine
    @     0x7f02aa29258e  start_thread
    @     0x7f02aa1c1513  __GI___clone
Aborted (core dumped)
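
A toy sketch of one possible mitigation (hypothetical names, not the project's actual fix): instead of asserting that the buffer being flushed is still buffers_.front(), look it up under the lock and skip the removal if rollbackToLog() has already dropped it:

#include <algorithm>
#include <list>
#include <memory>
#include <mutex>

struct Buffer { /* log entries elided */ };

class WalModel {
public:
    std::shared_ptr<Buffer> append() {
        std::lock_guard<std::mutex> g(mutex_);
        buffers_.push_back(std::make_shared<Buffer>());
        return buffers_.back();
    }

    // rollbackToLog() path: may drop buffers the flusher has already been handed.
    void rollbackAll() {
        std::lock_guard<std::mutex> g(mutex_);
        buffers_.clear();
    }

    // flushBuffer() path: tolerate a concurrent rollback instead of CHECK-failing.
    void finishFlush(const std::shared_ptr<Buffer>& buffer) {
        std::lock_guard<std::mutex> g(mutex_);
        auto it = std::find(buffers_.begin(), buffers_.end(), buffer);
        if (it != buffers_.end()) {
            buffers_.erase(it);   // normal case: drop the flushed buffer
        }                         // otherwise it was rolled back; nothing to do
    }

private:
    std::mutex mutex_;
    std::list<std::shared_ptr<Buffer>> buffers_;
};

int main() {
    WalModel wal;
    auto buf = wal.append();
    wal.rollbackAll();    // simulate rollbackToLog() racing ahead of the flusher
    wal.finishFlush(buf); // no longer trips the front-of-list assertion
    return 0;
}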

A Java keyword is incorrectly defined as a variable name.

Describe the bug (must be provided)
A Java keyword is used as a field name, which leads to a compilation error in the Java client interface. See line 57 of graph.thrift.
Additional context
Source code:
union ColumnValue {
    // Simple types
    1: bool boolean,
    .......
}
Generated code:
public static ColumnValue boolean(boolean value) {
    ColumnValue x = new ColumnValue();
    x.setBoolean(value);
    return x;
}

More UT for FileBasedWAL

Add more unit tests for FileBasedWal.

CentOS 7.5

  • OS: Linux 3.10.0-862.9.1.el7.x86_64 x86_64 x86_64
  • Compiler: gcc 7.3.0
  • CPU: Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz

Fix leader_election_test

When running leader_election_test, a segmentation fault sometimes occurs, as follows:

17/40 Test #17: leader_election_test .............***Exception: SegFault  0.94 sec

[wal] The accessAllLogs method in wal/InMemoryLogBuffer.cpp returns an incorrect result

Describe the bug (must be provided)
The accessAllLogs(...) method should return the LogID and TermID of the last log, but the TermID in the return value is always -1. See line 116 of wal/InMemoryLogBuffer.cpp. (A corrected sketch follows the source below.)

Additional context
Source code:

std::pair<LogID, TermID> InMemoryLogBuffer::accessAllLogs(
        std::function<void(LogID,
                           TermID,
                           ClusterID,
                           const std::string&)> fn) const {
    folly::RWSpinLock::ReadHolder rh(&accessLock_);
    LogID id = firstLogId_ - 1;
    TermID term = -1;
    for (auto& log : logs_) {
        ++id;
        fn(id, std::get<0>(log), std::get<1>(log), std::get<2>(log));
    }
    return std::make_pair(id, term);
}
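
A self-contained sketch of the corrected logic, with the member method simplified to a free function and the logs modeled as (TermID, ClusterID, message) tuples as the loop above suggests: update term inside the loop so the returned pair carries the last log's term instead of -1.

#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <tuple>
#include <utility>
#include <vector>

using LogID = int64_t;
using TermID = int64_t;
using ClusterID = int64_t;
using Record = std::tuple<TermID, ClusterID, std::string>;

std::pair<LogID, TermID> accessAllLogs(
        LogID firstLogId,
        const std::vector<Record>& logs,
        std::function<void(LogID, TermID, ClusterID, const std::string&)> fn) {
    LogID id = firstLogId - 1;
    TermID term = -1;
    for (auto& log : logs) {
        ++id;
        term = std::get<0>(log);   // keep track of the last log's term
        fn(id, term, std::get<1>(log), std::get<2>(log));
    }
    return std::make_pair(id, term);
}

int main() {
    std::vector<Record> logs = {{3, 0, "a"}, {3, 0, "b"}, {4, 0, "c"}};
    auto last = accessAllLogs(
        10, logs, [](LogID, TermID, ClusterID, const std::string&) {});
    assert(last.first == 12 && last.second == 4);   // last log's id and term
    return 0;
}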

Consensus Layer

Create a consensus layer to support strong data consistency.

[Wal] The destructor of the FileBasedWal class could cause an error in some cases

Describe the bug (must be provided)

  • Exception 1: When the flushBuffer() method of the FileBasedWal class executes, the buffer being persisted may not be the front buffer of the buffers_ list (line 391 of wal/FileBasedWal.cpp).

  • Exception 2: When the destructor of the FileBasedWal class executes, there can be two elements left in the buffers_ list rather than just one (line 58 of wal/FileBasedWal.cpp).

  • Possible reason: In the current implementation, a buffer in buffers_ is sent to the flusher_ thread for persistence only when its size reaches maxBufferSize_. rollbackToLog() should also handle the buffer that the rollback id falls in when that buffer's size is still less than maxBufferSize_.

  • The two exceptions arise as follows:

  1. When the last buffer (A) of the buffers_ list is not yet full, calling rollbackToLog() to roll the log id back to some point inside buffer A creates a new buffer (B) and appends it to buffers_, so buffers_ now holds two elements (A and B).

    • Buffer A will never receive any new logs and will never be sent to the flusher_ thread.
  2. Further operations then cause the following exceptions:

    • Exception 1: Once the new logs exceed maxBufferSize_, buffer B is sent to the flusher_ thread, and the flushBuffer() callback fails.
    • Exception 2: When the destructor of the FileBasedWal class runs, there are two elements (A and ?) left in buffers_, which triggers the check failure. (? can be B or C, ...)
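
A toy model of the scenario described above (hypothetical names and capacities, not the project's actual fix): rolling back into the unfilled tail buffer could truncate that buffer in place instead of appending a fresh one, so no stranded buffer is left for the flusher or the destructor to trip over.

#include <cassert>
#include <list>

struct Buffer {
    long firstLogId;
    long lastLogId;
    bool full() const { return lastLogId - firstLogId + 1 >= 100; }   // toy capacity
};

int main() {
    std::list<Buffer> buffers;
    buffers.push_back(Buffer{1, 90});      // tail buffer A, not yet full

    long target = 50;                      // rollback point inside buffer A

    // Behaviour described in the issue: a new buffer B is appended and A is
    // stranded, so buffers.size() == 2 at destruction time and B (not A) is
    // later handed to the flusher.
    //
    // One possible alternative: truncate A in place and keep appending into it.
    assert(!buffers.back().full() && buffers.back().firstLogId <= target);
    buffers.back().lastLogId = target;     // drop the entries after `target`

    assert(buffers.size() == 1);           // destructor-time invariant holds
    assert(buffers.back().lastLogId == 50);
    return 0;
}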

How To Reproduce (must be provided)

  • Add the following two test cases to FileBasedWalTest.cpp and run them.

Exception 1:

TEST(FileBasedWal, RollbackToBufferException1) {
    // Force to make each file 1MB, each buffer is 1MB, and there are two
    // buffers at most
    FileBasedWalPolicy policy;
    policy.fileSize = 1;
    policy.bufferSize = 1;
    policy.numBuffers = 2;

    TempDir walDir("/tmp/testWal.XXXXXX");

    // Create a new WAL, add one log and close it
    auto wal = FileBasedWal::getWal(walDir.path(), policy, flusher.get());
    EXPECT_EQ(0, wal->lastLogId());

    // Append < 1MB logs in total
    for (int i = 1; i <= 100; i++) {
        ASSERT_TRUE(wal->appendLog(i /*id*/, 1 /*term*/, 0 /*cluster*/,
            folly::stringPrintf(kLongMsg, i)));
    }
    ASSERT_EQ(100, wal->lastLogId());

    // Rollback 10 logs
    wal->rollbackToLog(90);
    ASSERT_EQ(90, wal->lastLogId());

    // Now let's append >1M more logs
    for (int i = 91; i <= 2100; i++) {
        ASSERT_TRUE(
            wal->appendLog(i /*id*/, 1 /*term*/, 0 /*cluster*/,
                folly::stringPrintf(kLongMsg, i + 1000)));
    }
    ASSERT_EQ(2100, wal->lastLogId());
}

Exception 2:

TEST(FileBasedWal, RollbackToBufferException2) {
    // Force to make each file 1MB, each buffer is 1MB, and there are two
    // buffers at most
    FileBasedWalPolicy policy;
    policy.fileSize = 1;
    policy.bufferSize = 1;
    policy.numBuffers = 2;

    TempDir walDir("/tmp/testWal.XXXXXX");

    // Create a new WAL, add one log and close it
    auto wal = FileBasedWal::getWal(walDir.path(), policy, flusher.get());
    EXPECT_EQ(0, wal->lastLogId());

    // Append < 1M logs in total
    for (int i = 1; i <= 100; i++) {
        ASSERT_TRUE(wal->appendLog(i /*id*/, 1 /*term*/, 0 /*cluster*/,
            folly::stringPrintf(kLongMsg, i)));
    }
    ASSERT_EQ(100, wal->lastLogId());

    // Rollback 10 logs
    wal->rollbackToLog(90);
    ASSERT_EQ(90, wal->lastLogId());

    // Wait one second to make sure all buffers have been flushed
    sleep(1);
}

Expected behavior

  • Run the test cases above and you will get the following errors:

Exception 1:

[ RUN      ] FileBasedWal.RollbackToBufferException1
W0117 09:21:04.762796  4202 FileBasedWal.cpp:412] Write buffer is exhausted, need to wait for vacancy
F0117 09:21:04.765314  4204 FileBasedWal.cpp:400] Check failed: buffer.get() == buffers_.front().get() (0xbba760 vs. 0xbba7b0)
*** Check failure stack trace: ***
    @           0x64c39d  google::LogMessage::Fail()
    @           0x6509ac  google::LogMessage::SendToLog()
    @           0x64c09d  google::LogMessage::Flush()
    @           0x64c889  google::LogMessageFatal::~LogMessageFatal()
    @           0x58274d  nebula::wal::FileBasedWal::flushBuffer()
    @           0x575d01  nebula::wal::BufferFlusher::flushLoop()
    @           0x5790d0  _ZSt13__invoke_implIvRMN6nebula3wal13BufferFlusherEFvvERPS2_JEET_St21__invoke_memfun_derefOT0_OT1_DpOT2_
    @           0x579047  _ZSt8__invokeIRMN6nebula3wal13BufferFlusherEFvvEJRPS2_EENSt15__invoke_resultIT_JDpT0_EE4typeEOS9_DpOSA_
    @           0x578fb4  _ZNSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS2_EE6__callIvJEJLm0EEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE
    @           0x578f6a  _ZNSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS2_EEclIJEvEET0_DpOT_
    @           0x578e58  _ZSt13__invoke_implIvRSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS3_EEJEET_St14__invoke_otherOT0_DpOT1_
    @           0x578d6a  _ZSt8__invokeIRSt5_BindIFMN6nebula3wal13BufferFlusherEFvvEPS3_EEJEENSt15__invoke_resultIT_JDpT0_EE4typeEOSB_DpOSC_
    @           0x578bf6  _ZNSt5_BindIFS_IFMN6nebula3wal13BufferFlusherEFvvEPS2_EEvEE6__callIvJEJEEET_OSt5tupleIJDpT0_EESt12_Index_tupleIJXspT1_EEE
    @           0x578904  _ZNSt5_BindIFS_IFMN6nebula3wal13BufferFlusherEFvvEPS2_EEvEEclIJEvEET0_DpOT_
    @           0x5783a1  std::_Function_handler<>::_M_invoke()
    @           0x5763d0  std::function<>::operator()()
    @           0x5761fd  nebula::thread::NamedThread::hook()
    @           0x577569  _ZSt13__invoke_implIvPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEEJS5_St5_BindIFSF_IFMN6nebula3wal13BufferFlusherEFvvEPSI_EEvEEEET_St14__invoke_otherOT0_DpOT1_
    @           0x576a5f  _ZSt8__invokeIPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEEJS5_St5_BindIFSF_IFMN6nebula3wal13BufferFlusherEFvvEPSI_EEvEEEENSt15__invoke_resultIT_JDpT0_EE4typeEOSR_DpOSS_
    @           0x5791d1  _ZNSt6thread8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES7_St5_BindIFSH_IFMN6nebula3wal13BufferFlusherEFvvEPSK_EEvEEEEE9_M_invokeIJLm0ELm1ELm2EEEEDTcl8__invokespcl10_S_declvalIXT_EEEEESt12_Index_tupleIJXspT_EEE
    @           0x579170  _ZNSt6thread8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES7_St5_BindIFSH_IFMN6nebula3wal13BufferFlusherEFvvEPSK_EEvEEEEEclEv
    @           0x579154  _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJPFvRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt8functionIFvvEEES8_St5_BindIFSI_IFMN6nebula3wal13BufferFlusherEFvvEPSL_EEvEEEEEEE6_M_runEv
    @           0x70d0c3  execute_native_thread_routine
    @     0x7fcca652a58e  start_thread
    @     0x7fcca6459513  __GI___clone
Aborted (core dumped)

Exception 2:

[ RUN      ] FileBasedWal.RollbackToBufferException2
F0117 08:01:09.093652   523 FileBasedWal.cpp:58] Check failed: buffers_.size() == 1UL (2 vs. 1)
*** Check failure stack trace: ***
    @           0x64c02d  google::LogMessage::Fail()
    @           0x65063c  google::LogMessage::SendToLog()
    @           0x64bd2d  google::LogMessage::Flush()
    @           0x64c519  google::LogMessageFatal::~LogMessageFatal()
    @           0x58004c  nebula::wal::FileBasedWal::~FileBasedWal()
    @           0x58014c  nebula::wal::FileBasedWal::~FileBasedWal()
    @           0x58ada1  std::_Sp_counted_ptr<>::_M_dispose()
    @           0x5734a8  std::_Sp_counted_base<>::_M_release()
    @           0x572e8f  std::__shared_count<>::~__shared_count()
    @           0x572d22  std::__shared_ptr<>::~__shared_ptr()
    @           0x572d3e  std::shared_ptr<>::~shared_ptr()
    @           0x57228a  nebula::wal::FileBasedWal_RollbackToBuffer_Test::TestBody()
    @           0x64a28a  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @           0x641c9a  testing::Test::Run()
    @           0x641de8  testing::TestInfo::Run()
    @           0x641ec5  testing::TestCase::Run()
    @           0x64239c  testing::internal::UnitTestImpl::RunAllTests()
    @           0x64a70a  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @           0x64248f  testing::UnitTest::Run()
    @           0x572c84  RUN_ALL_TESTS()
    @           0x572466  main
    @     0x7f4d0940a413  __libc_start_main
    @           0x56e82e  _start
Aborted (core dumped)

Add tag existence check syntax

We need reasonable syntactic sugar to check whether a tag exists on a given vertex. This could be useful when filtering vertices.

StorageClient crash

The root cause is that the AsyncSocket is not created in the evb (EventBase) thread.
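
A minimal sketch of the constraint being described (assuming folly; simplified, not the actual StorageClient code): folly::AsyncSocket is single-threaded, so it should be created, used, and destroyed on its EventBase thread, for example by marshalling the construction onto that thread with runInEventBaseThreadAndWait():

#include <folly/io/async/AsyncSocket.h>
#include <folly/io/async/EventBase.h>
#include <folly/io/async/ScopedEventBaseThread.h>

int main() {
    folly::ScopedEventBaseThread ebt;            // owns an EventBase loop thread
    folly::EventBase* evb = ebt.getEventBase();

    folly::AsyncSocket::UniquePtr sock;
    evb->runInEventBaseThreadAndWait([&] {
        // Created on the EventBase thread, as AsyncSocket requires.
        sock = folly::AsyncSocket::newSocket(evb);
    });

    // ... connects/reads/writes would also be issued from the EventBase thread ...

    evb->runInEventBaseThreadAndWait([&] {
        sock.reset();                            // destroyed on the same thread
    });
    return 0;
}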
