
elliptics's Introduction

Elliptics network is a fault-tolerant key/value storage.
It was designed to handle all error cases, from simple disk problems up to datacenter failures.
With the default key generation policy it implements a distributed hash table object storage.

The network does not use dedicated servers to maintain metadata information,
and it supports redundant object storage.
Small to medium-sized write benchmarks can be found on the eblob page and in the appropriate blog section.

For more details check http://reverbrain.com/elliptics
Google group: https://groups.google.com/forum/?fromgroups=#!forum/reverbrain

elliptics's People

Contributors

aborg-dev, abudnik, agend, bacek, bioothod, fabiand, iderikon, ijon, noxiouz, redbaron, resetius, savetherbtz, scientist-st, shaitan, slon, snowfed, theinkvi, tigro, torkve, toshic, yandexbuildbot


elliptics's Issues

read/write locks?

Hello. Does elliptics have read/write locks or any other way to guarantee file consistency? That is, while I am writing a file, can I be sure there are no other writes and that all reads return the previous version of the file? If so, is there a queue for concurrent writes? Thanks!
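For context, elliptics exposes a compare-and-swap style write (write_cas, exercised in the project's own test suite, as seen in the "Tests fail" report below), which gives optimistic concurrency rather than explicit read/write locks. The retry pattern it enables can be sketched with a toy in-memory store; this class is an illustration of the pattern, not the elliptics API:

```python
import hashlib
import threading

class CasStore:
    """Toy in-memory store illustrating the compare-and-swap pattern
    behind write_cas (illustrative only, NOT the elliptics API)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    @staticmethod
    def checksum(value):
        return hashlib.sha512(value.encode()).hexdigest()

    def write(self, key, value):
        with self._lock:
            self._data[key] = value

    def read(self, key):
        with self._lock:
            return self._data.get(key)

    def write_cas(self, key, new_value, expected_csum):
        """Write only if the stored value still matches expected_csum."""
        with self._lock:
            current = self._data.get(key)
            if current is not None and self.checksum(current) != expected_csum:
                return False  # someone wrote in between; caller must retry
            self._data[key] = new_value
            return True

def cas_update(store, key, fn, retries=10):
    """Optimistic read-modify-write: no lock is held between read and write;
    a concurrent writer simply forces a retry."""
    for _ in range(retries):
        old = store.read(key)
        if store.write_cas(key, fn(old), store.checksum(old)):
            return True
    return False
```

With this model, concurrent writers never corrupt each other: one of them loses the CAS race and retries against the new value, so readers only ever observe complete versions.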

Tests fail

Hello!

I am trying to build elliptics Debian packages from source. I have been working on a helper Dockerfile to do this:

https://github.com/visualphoenix/elliptics-builder

The pytests fail when building elliptics:

=================================== FAILURES ===================================
__________________________ TestSession.test_write_cas __________________________

self = <test_session_rwr.TestSession instance at 0x2b4abb25e4d0>, server = None
simple_node = <elliptics.node.Node object at 0x2b4abb1bf680>

    def test_write_cas(self, server, simple_node):
        session = elliptics.Session(simple_node)
        session.groups = session.routes.groups()

        key = 'cas key'
        data1 = 'data 1'
        data2 = 'data 2'

        checked_write(session, key, data1)
        checked_read(session, key, data1)

        results = session.write_cas(key, data2, session.transform(data1)).get()
        check_write_results(results, len(session.groups), data2, session)
        checked_read(session, key, data2)

        results = session.write_cas(key, lambda x: '__' + x + '__').get()
>       check_write_results(results, len(session.groups), '__' + data2 + '__', session)

data1      = 'data 1'
data2      = 'data 2'
key        = 'cas key'
results    = []
self       = <test_session_rwr.TestSession instance at 0x2b4abb25e4d0>
server     = None
session    = <elliptics.session.Session object at 0x2b4abb1bf9f0>
simple_node = <elliptics.node.Node object at 0x2b4abb1bf680>

../../../tests/pytests/test_session_rwr.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

results = [], number = 3, data = '__data 2__'
session = <elliptics.session.Session object at 0x2b4abb1bf9f0>

    def check_write_results(results, number, data, session):
>       assert len(results) == number
E       assert 0 == 3
E        +  where 0 = len([])

data       = '__data 2__'
number     = 3
results    = []
session    = <elliptics.session.Session object at 0x2b4abb1bf9f0>

../../../tests/pytests/test_session_rwr.py:29: AssertionError
!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!
===================== 1 failed, 57 passed in 14.51 seconds =====================
make[4]: *** [test] Error 2
make[4]: Leaving directory `/elliptics/obj-x86_64-linux-gnu'
make[3]: *** [tests/CMakeFiles/test.dir/all] Error 2
make[3]: Leaving directory `/elliptics/obj-x86_64-linux-gnu'
make[2]: *** [tests/CMakeFiles/test.dir/rule] Error 2
make[2]: Leaving directory `/elliptics/obj-x86_64-linux-gnu'
make[1]: *** [test] Error 2
make[1]: Leaving directory `/elliptics/obj-x86_64-linux-gnu'
make: *** [debian/stamp-makefile-check] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

I believe I am pulling the correct branches/tags.

To ease review, I am attempting to build the following:

# ease copy/pasting into a terminal instead of running through docker
function ENV { export $1=$2; }

ENV BLACKHOLE_VER v0.2
git clone https://github.com/3Hren/blackhole.git -b $BLACKHOLE_VER

ENV REACT_VER v2.3.1
git clone https://github.com/reverbrain/react.git -b $REACT_VER

ENV EBLOB_VER v0.22.6
git clone http://github.com/reverbrain/eblob.git -b $EBLOB_VER

ENV COCAINE_VER v0.11
git clone http://github.com/cocaine/cocaine-core.git -b $COCAINE_VER
git clone http://github.com/cocaine/cocaine-framework-python.git -b $COCAINE_VER
git clone http://github.com/cocaine/cocaine-framework-native.git -b $COCAINE_VER
git clone http://github.com/cocaine/cocaine-tools.git -b $COCAINE_VER

ENV ELLIPTICS_VER v2.25
git clone http://github.com/reverbrain/elliptics.git -b $ELLIPTICS_VER

I have tried both the elliptics branch v2.25 and the tag v2.26.3.22, but both have the same issue.

If you could provide some guidance as to the correct combination of the above libraries to get the tests to pass, I would appreciate it!

Expiration timeout

Hello!

I'd like some objects to have an expiration time (they would be session objects).
I have managed to do this by using the cache and setting the cache_only ioflag:

import elliptics

elog = elliptics.Logger("/dev/stderr", 0)
cfg = elliptics.Config()

node = elliptics.Node(elog, cfg)
node.add_remote("localhost", 1025)
s = elliptics.Session(node)

s.set_ioflags(elliptics.io_flags.cache_only)
s.write_cache("testkey", "testdata", 10)

So it created the object "testkey" in the cache and deleted it after 10 seconds.

It is said at http://doc.reverbrain.com/elliptics:layers?s[]=expiration that "There is possibility to remove cached entry not only from in-memory cache, but also from disk, when expiration timeout fires or when you remove object by hands."

So, how can I remove an entry from disk using an expiration timeout?

v2.26.3.28 core dumps

The v2.26.3.28 Debian packages from http://repo.reverbrain.com/ for Ubuntu 14.04 cause a core dump:

Illegal instruction     (core dumped) dnet_ioserv -c /app/setup/ioserv.json

Using the ubuntu:14.04 Docker image:

V=2.26.3.28

cat > /etc/apt/sources.list.d/reverbrain.list <<EOF
deb http://repo.reverbrain.com/trusty/ current/amd64/
deb http://repo.reverbrain.com/trusty/ current/all/
EOF

wget -qO- http://repo.reverbrain.com/REVERBRAIN.GPG | apt-key add -

apt-get -y update
apt-get install -y elliptics=$V  elliptics-client=$V

Setting V=2.26.3.27 works.

GDB shows:

Starting program: /usr/bin/dnet_ioserv -c /app/setup/ioserv.json
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGILL, Illegal instruction.
0x00007ffff69a7808 in ?? () from /usr/lib/libhandystats.so

dnet_recovery question

I ran dnet_recovery -r 10.0.0.130:1025:2 merge
but got a result that looks strange to me. Below are the results before and after running dnet_recovery; I ran it right after populating the storage. I noticed some movement in the keys. I should note that I use our own key modification (https://github.com/agend/elliptics/tree/passthrought_original_id_modification). Is this normal behaviour for recovery?

Logs
Before:
[]# ./custom.sh 'eblob_index_info /opt/storage/1/data-0.0.index'
Checking elliptics on 10.0.0.130
Total records: 47468
Removed records: 0
Checking elliptics on 10.0.0.134
Total records: 52531
Removed records: 0
Checking elliptics on 10.0.0.142
Total records: 52129
Removed records: 0
Checking elliptics on 10.0.0.144
Total records: 47870
Removed records: 0

After:
[]# ./custom.sh 'eblob_index_info /opt/storage/1/data-0.0.index'
Checking elliptics on 10.0.0.130
Total records: 72323
Removed records: 25226
Checking elliptics on 10.0.0.134
Total records: 77757
Removed records: 24855
Checking elliptics on 10.0.0.142
Total records: 77278
Removed records: 25029
Checking elliptics on 10.0.0.144
Total records: 72899
Removed records: 25149

API: allow server to send its keys to different groups

Current state

The server has no API to send its keys to other groups. This forces the recovery process to read data from one group and send it to the missing replicas.

What is wanted to be changed

If the server could be told to send some keys to other groups (according to the merged iterated keys), this would remove expensive data transfers (even when the key is stored on the same machine where recovery runs) and would allow external agents to watch and start recovery. Currently they have to download data from a 'good' replica and then write it to the missing groups.

segmentation fault with elliptics 2.25.4.0

I have:
a fresh Ubuntu 12.04.4
a fresh elliptics 2.25 from the reverbrain repo
the default JSON config from the elliptics git

I got:
root@elliptics-test:~# dnet_ioserv -c /etc/elliptics/dnet_ioserv.json
2014-03-30 15:18:24.450421 0/3668/3668 ffffffff: Reopened log file
2014-03-30 15:18:24.451152 0/3668/3668 1: blob: start
2014-03-30 15:18:24.451396 0/3668/3668 2: blob: eblob_iterate_existing: finished.
2014-03-30 15:18:24.451723 0/3668/3668 4: Stat: la: 0.000000 0.040000 0.150000, mem: total: 2051600, free: 1249580, cache: 697452.
2014-03-30 15:18:24.451738 0/3670/3668 2: blob: datasort_next_defrag: defrag: next datasort is sheduled to +18446744073709551615 seconds.
2014-03-30 15:18:24.451843 0/3670/3668 2: blob: eblob_defrag_raw: Operation not permitted (1); count
2014-03-30 15:18:24.451854 0/3670/3668 2: blob: eblob_defrag_raw: defrag: complete: -2
2014-03-30 15:18:24.451861 0/3670/3668 2: blob: datasort_next_defrag: defrag: next datasort is sheduled to +18446744073709551615 seconds.
2014-03-30 15:18:24.452019 0/3668/3668 2: Elliptics starts
2014-03-30 15:18:24.452078 0/3668/3668 3: Using default check timeout (30 seconds).
2014-03-30 15:18:24.452097 0/3668/3668 3: Using default stall count (3 transactions).
2014-03-30 15:18:24.452960 0/3668/3668 2: Grew BLOCKING pool by: 0 -> 16 IO threads
2014-03-30 15:18:24.453809 0/3668/3668 2: Grew NONBLOCKING pool by: 0 -> 16 IO threads
2014-03-30 15:18:24.454053 0/3668/3668 4: New node has been created.
2014-03-30 15:18:24.454160 0/3668/3668 3: Stack size: 8388608 bytes
2014-03-30 15:18:24.454314 0/3668/3668 2: Successfully initialized notify hash table (256 entries).
2014-03-30 15:18:24.454363 0/3668/3668 3: No notify hash size provided, using default 256.
2014-03-30 15:18:24.454576 0/3710/3668 2: Started reconnection thread. Timeout: 60 seconds. Route table update every 60 seconds.
Segmentation fault (core dumped)

gdb says:
[skip]
Core was generated by `dnet_ioserv -c /etc/elliptics/dnet_ioserv.json'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f4df7eff653 in init_updater(void*) () from /usr/lib/libeblob.so.0.21

(gdb) bt
#0 0x00007f4df7eff653 in init_updater(void*) () from /usr/lib/libeblob.so.0.21
#1 0x00007f4df7effb2c in start_action () from /usr/lib/libeblob.so.0.21
#2 0x00007f4df857e537 in ioremap::cache::slru_cache_t::life_check() () from /usr/lib/libelliptics.so.2.25
#3 0x00007f4df75c8c78 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4 0x00007f4df7a7ce9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007f4df6d7e3fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x0000000000000000 in ?? ()

documentation fixes

example/ioserv.conf:

  • the (flags=8) description should be removed, as it is not used

http://doc.reverbrain.com/elliptics:api-c:

  • the "DNET_CFG_NO_META" entry should be removed
  • DNET_CFG_RANDOMIZE_STATES - it seems there was a mention of DNET_CFG_MIX_STATES, but it was lost, and it is now not clear from the description that these two flags are mutually exclusive. Also, both descriptions use "groups" while the flag names use "STATES"; it would be good to explain that, for the purposes of the documentation, groups are states.

example/ioserv.json:

  • the "loggers.root" section should be removed completely, as it is not used

I'll add more as comments to this issue if I find any.

Logging options

Where can I find information on how to configure logging via the JSON configuration file?

Recovering by window

How it recovers now

During the recovery phase, dnet_recovery has a list of keys, with replica metadata, that should be synced. dnet_recovery splits this list into batches and recovers batch by batch; it does not start recovering keys from the next batch while the current batch is unfinished. This leads to waviness in the recovery process.

What is wanted to be changed

Recover not by batch but by window. The window limits the maximum number of keys that can be under recovery simultaneously; as soon as a key is recovered, dnet_recovery starts recovering the next key from the list.
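The difference between batch and window scheduling can be sketched with a bounded-concurrency loop. This is a plain-Python model of the proposal, not dnet_recovery code; recover_one is an assumed per-key callback. A semaphore caps the number of keys in flight, and the next key starts as soon as any slot frees up, with no batch barrier:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def recover_windowed(keys, recover_one, window=4):
    """Keep at most `window` keys under recovery at once; as soon as one
    key finishes, the next key from the list starts."""
    slots = threading.Semaphore(window)

    def worker(key):
        try:
            recover_one(key)
        finally:
            slots.release()   # frees a slot immediately, no batch barrier

    with ThreadPoolExecutor(max_workers=window) as pool:
        for key in keys:
            slots.acquire()   # blocks only while the window is full
            pool.submit(worker, key)
        # leaving the `with` block waits for the remaining keys to finish
```

A batch scheduler would instead submit N keys and join all of them before submitting the next N, which is exactly the waviness described above: one slow key stalls the whole batch.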

dnet_recovery hang with exception

Got an exception and a hang:
17 Apr 14 15:48:39 MainProcess INFO Fetching results
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.6/threading.py", line 484, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.6/multiprocessing/pool.py", line 225, in _handle_tasks
put(task)
TypeError: No to_python (by-value) converter found for C++ type: dnet_id

full log: https://gist.github.com/agend/10981559

Inconsistent remove_on_fail behaviour

remove_on_fail is just one of many options we can apply to a session. If we want to write several files with the same options (i.e. a bulk upload), we usually create one session and issue as many writes as we need. If there is any problem with group availability, all our writes will fail. With any other session option this causes no problems, but with the current remove_on_fail implementation we get a segmentation fault: remove_on_fail_impl uses exactly the same session in every remove transaction.

I propose moving the sess.clone() call from remove_on_fail() to remove_on_fail_impl().

Use case: recovery examples

I can't find in the documentation any simple examples/guides for typical situations (e.g. one node was added: run dnet_recovery with these options on these nodes; a node went down and was then restored; full group recovery; etc.). Do you have something like this?

cppdef.h has misleading unused typedefs

The new API made some entries in cppdef.h completely irrelevant to the current code base. It seems that at least all uses of array_result_holder can be deleted. Could you do some cleanup there, please? It's a public header, and having obsolete code can be misleading for newcomers like me.

RPM build error

I have tried to run rpmbuild for elliptics-bf.rpm but got several errors from cmake at the end:

Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/agent/rpmbuild/BUILDROOT/elliptics-2.22.5.1-1.fc18.x86_64
error: Installed (but unpackaged) file(s) found:
   /usr/lib/cmake/Elliptics/EllipticsConfig.cmake
   /usr/lib/cmake/Elliptics/EllipticsConfigVersion.cmake


RPM build errors:
    Installed (but unpackaged) file(s) found:
   /usr/lib/cmake/Elliptics/EllipticsConfig.cmake
   /usr/lib/cmake/Elliptics/EllipticsConfigVersion.cmake

P.S.
I had more issues with the cmake files and the spec file, but I fixed them myself. I will post my patch after this issue is fixed.

Key expiration

Do you have any plans to implement key expiration like in MongoDB or Redis?

http://docs.mongodb.org/manual/tutorial/expire-data/
http://redis.io/commands/expire

As I can see, only memory cache expiration is implemented (#258).

If this is not in your plans, can you please suggest the most convenient way to delete old keys from disk (maybe secondary indexes, or direct eblob iteration)?

Use case: deleting entries older than 1 year (or even 3 years).

UPD:
It seems that session.start_iterator() with elliptics.iterator_flags.ts_range will help. Does it always return older keys first?
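Whichever iteration API is used, the safe pattern is to filter by timestamp on the client rather than rely on key order, since a DHT iterator typically walks keys in hash order, not time order. A plain-Python sketch of the filtering step, where the (key, timestamp) pairs stand in for whatever the iterator yields (the names here are illustrative, not elliptics API):

```python
import time

def expired_keys(records, max_age_seconds, now=None):
    """records: iterable of (key, timestamp) pairs collected from an
    iterator. Returns the keys whose timestamp is older than the cutoff;
    the result can then be fed to remove operations."""
    now = time.time() if now is None else now
    cutoff = now - max_age_seconds
    return [key for key, ts in records if ts < cutoff]
```

For "older than 1 year" the cutoff would simply be max_age_seconds = 365 * 24 * 3600; batching the resulting keys into removal requests keeps the deletion pass bounded in memory.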

Can't start the server normally

Hello.

I get an error in any server configuration on different nodes:

2014-12-17 12:58:45.384822 0000000000000000/13507/13466 DEBUG: blob: caching json statistics 'backend_id': 1
2014-12-17 12:58:45.385079 0000000000000000/13507/13466 ERROR: blob: stat: failed to open '/sys/dev/block/0:19/stat': No such file or directory [-2] 'backend_id': 1
2014-12-17 12:58:45.385208 0000000000000000/13507/13466 DEBUG: blob: json statistics has been cached 'backend_id': 1

All nodes run Debian Wheezy 7.7. I installed elliptics from your repo.
Can you explain what this error means?

Memory leak in dnet_recovery, slow recovery ??

I ran two nodes in one group with 500k records of 1KB each, using the command "dnet_recovery -r 127.0.0.1:1025:2 -g 2 -n 2 -L 10 merge". After 40 minutes one of dnet_recovery's processes had eaten all the memory (3GB+, while I have only 500MB of data in total).
I have also noticed that recovery is very slow (I have experience with a previous version, and it was really fast).
Version 2.26.3.27

dnet_ioclient doesn't delete file from filestore

Hello!

I have elliptics v2.24.14.21 with the eblob backend.
dnet_ioclient doesn't delete the file from the store; I've used it like this:

dnet_ioclient -r localhost:1025:2-0 -g 2 -N first -u 123.txt

Elliptics log:

2013-11-07 19:37:49.473136 0/9644/9626 2: 127.0.0.1:54401: client net TOS value set to 6
2013-11-07 19:37:49.473189 0/9644/9626 2: Accepted client 127.0.0.1:54401, socket: 13, server address: 127.0.0.1:1025, idx: 0.
2013-11-07 19:37:49.473397 0/9645/9626 2: input io queue report: elapsed: 1383838669.473 s, current size: 1, min: -1, max: 1, volume: 1
2013-11-07 19:37:49.473464 0/9643/9626 2: 127.0.0.1:54401: reverse lookup command: network version: 2.24.14.21, local version: 2.24.14.21
2013-11-07 19:37:49.473481 0/9643/9626 2: 127.0.0.1:54401: reverse lookup command: client indexes shard count: 10, server indexes shard count: 2
2013-11-07 19:37:49.473506 0/9643/9626 2: 2:020000001800...000000000000: sending address 127.0.0.1:1025 -> 127.0.0.1:54401, addr_num: 1, time-took: 8
2013-11-07 19:37:49.473539 0/9643/9626 2: 2:020000001800...000000000000: REVERSE_LOOKUP: trans: 0, cflags: 0x18, time: 86 usecs, err: 0.
2013-11-07 19:37:49.473884 0/9642/9626 2: 2:000000000000...000000000000: ROUTE_LIST: trans: 1, cflags: 0x19, time: 13 usecs, err: 0.
2013-11-07 19:37:49.474054 0/9640/9626 2: 2:000000000000...000000000000: ROUTE_LIST: trans: 2, cflags: 0x19, time: 8 usecs, err: 0.
2013-11-07 19:37:49.474947 0/9645/9626 1: Peer 127.0.0.1:54401 has disconnected.
2013-11-07 19:37:49.474984 0/9645/9626 1: 127.0.0.1:54401: resetting state: Connection reset by peer [-104]
2013-11-07 19:37:49.475026 0/9645/9626 2: Do not add reconnection addr: 127.0.0.1:54401, join state: 0x0.

At the same time, cocaine's service managed to delete that file!

I can post the logs of cocaine's successful file deletion, just tell me if needed.

Can't get the JSON config working

Tried running dnet_ioserv -c elliptics.json (just a copy from the repository), but the process exits immediately with empty output. The old format works as expected.

Link issue with 1815ae62bcdcaf62cb2be378cfbdb8888ad3f49b

When linking I see issues like:

| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::thread::thread()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::system::system_category()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `vtable for boost::detail::thread_data_base'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::detail::thread_data_base::~thread_data_base()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::thread::join()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::thread::detach()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `typeinfo for boost::detail::thread_data_base'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::thread::start_thread()'
| ../library/libelliptics.so.2.18.3.2: undefined reference to `boost::system::generic_category()'

I can't tell if it is a general issue or just related to my linker not supporting "overlinking" (I'm using gold, see [1]).

Here is a patch that solves the issue for me:
http://pastebin.com/H8tyUcxX

[1] http://en.wikipedia.org/wiki/Gold_%28linker%29

Regards
scientist-st

Defragmentation stop

Problem description

A full defragmentation run may take too long to complete. Some cases require stopping the defragmentation process, e.g. when we have 2 replicas in total and one of them has failed due to disk corruption or similar while the remaining replica is being defragmented.

Solution

The eblob defragmentation process consists of defragmenting a list of blobs, so if a 'stop defrag' command arrives, finish defragmenting the current blob and then simply skip the remaining blobs in the list.
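The proposed behaviour is easy to express: the stop request is only checked between blobs, so the blob currently being defragmented always runs to completion. A minimal sketch in Python (an illustrative model of the control flow, not the eblob code):

```python
import threading

def defrag_blobs(blobs, defrag_one, stop_event):
    """Defragment blobs one by one; if a stop request arrives, finish the
    blob currently being processed and skip the remaining ones.
    Returns the list of blobs that were actually defragmented."""
    processed = []
    for blob in blobs:
        if stop_event.is_set():
            break             # checked only between blobs, never mid-blob
        defrag_one(blob)      # always runs to completion once started
        processed.append(blob)
    return processed
```

Checking only at blob boundaries keeps each blob internally consistent while still bounding the latency of a stop request to one blob's worth of work.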

Build: commit a49d1c5 has broken the build on Ubuntu 12.04

The build is broken by commit a49d1c5.

make's output:

...
[  3%] Building C object library/CMakeFiles/elliptics_client.dir/crypto.c.o
[  5%] Building C object library/CMakeFiles/elliptics_client.dir/crypto/sha512.c.o
/home/ikorolev/repos/elliptics/library/crypto/sha512.c: In function ‘sha512_process_bytes’:
/home/ikorolev/repos/elliptics/library/crypto/sha512.c:388:7: warning: implicit declaration of function ‘dnet_offsetof’ [-Wimplicit-function-declaration]
/home/ikorolev/repos/elliptics/library/crypto/sha512.c:388:11: error: expected expression before ‘struct’
make[2]: *** [library/CMakeFiles/elliptics_client.dir/crypto/sha512.c.o] Error 1
make[1]: *** [library/CMakeFiles/elliptics_client.dir/all] Error 2
make: *** [all] Error 2

Invalid config file in examples in 2.25.4.9

Hi
I hit some errors during the step-by-step install tutorial at http://habrahabr.ru/company/yandex/blog/214069/
I just installed the latest available elliptics instead of the 2.24.14.31 it points to.
There is an error:

vagrant@vagrant-ubuntu-precise-64:~$ dnet_ioserv -c tst_ioserv.conf
2014-04-25 22:53:13.502441 0/2576/2576 1: cnf: failed to read config file 'tst_ioserv.conf': parser error at line 1: Expect either an object or array at root
##################################
^
+

I used the config file template from /usr/share/doc/elliptics/examples.
It looks like it should be fixed to be compatible with the latest version.
Thanks

A slash at the end of the "data" parameter makes dnet_ioserv eat 100% of the CPU

Found on version 2.25.4.5.
I accidentally left a slash at the end of the "data" parameter, like this:

    "backends": 
            {
            "type": "blob",
            "history": "/data/disk1/history.2",
            "data": "/data/disk1/eblob.2/",
            "sync": "0",
            "blob_flags": "1",
            "blob_size": "10G"
            }

After trying to write something (executing "s.write_data("test_key", "test_data").get()" from Python) I get a connection timeout, and on the server side the dnet_ioserv process eats CPU and throws log entries every couple of milliseconds, like this:
2014-04-19 22:52:31.555392 0/4997/4993 2: blob: eblob_base_ctl_open: creating base: /data/disk1/eblob.2//data/disk1/eblob.2/-0.5950247
2014-04-19 22:52:31.555397 0/4997/4993 1: eblob_base_ctl_open: FAILED: -2
2014-04-19 22:52:31.555403 0/4997/4993 2: blob: eblob_base_ctl_open: creating base: /data/disk1/eblob.2//data/disk1/eblob.2/-0.5950248
2014-04-19 22:52:31.555407 0/4997/4993 1: eblob_base_ctl_open: FAILED: -2
2014-04-19 22:52:31.555414 0/4997/4993 2: blob: eblob_base_ctl_open: creating base: /data/disk1/eblob.2//data/disk1/eblob.2/-0.5950249
2014-04-19 22:52:31.555418 0/4997/4993 1: eblob_base_ctl_open: FAILED: -2
2014-04-19 22:52:31.555425 0/4997/4993 2: blob: eblob_base_ctl_open: creating base: /data/disk1/eblob.2//data/disk1/eblob.2/-0.5950250
2014-04-19 22:52:31.555429 0/4997/4993 1: eblob_base_ctl_open: FAILED: -2
2014-04-19 22:52:31.555435 0/4997/4993 2: blob: eblob_base_ctl_open: creating base: /data/disk1/eblob.2//data/disk1/eblob.2/-0.5950251
2014-04-19 22:52:31.555441 0/4997/4993 1: eblob_base_ctl_open: FAILED: -2

Removing the slash from the "data" parameter helps, but maybe it's better to strip it while dnet_ioserv parses its config on startup.
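The suggested normalization is a one-liner in most languages. A sketch of what a config loader could do with the "data" value (a hypothetical helper, not the actual dnet_ioserv parser, which is written in C):

```python
import os

def normalize_data_dir(path):
    """Collapse duplicate separators and drop a trailing slash, so that
    blob base names like '<data>-0.0' are built from a clean directory
    path instead of something like '/data/disk1/eblob.2//...'."""
    return os.path.normpath(path)
```

For example, normalize_data_dir("/data/disk1/eblob.2/") yields "/data/disk1/eblob.2", which avoids the doubled-path base names visible in the log above.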

Does not compile

$ git clone ....
$ cmake .
....
CMake Error at cmake/Modules/locate_library.cmake:13 (MESSAGE):
react development files are required to build.
Call Stack (most recent call first):
CMakeLists.txt:96 (locate_library)

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
COCAINE_INCLUDE_DIRS (ADVANCED)
used as include directory in directory /usr/src/elliptics/example/module_backend
used as include directory in directory /usr/src/elliptics/example/module_backend
EBLOB_INCLUDE_DIRS (ADVANCED)
used as include directory in directory /usr/src/elliptics/example/module_backend
used as include directory in directory /usr/src/elliptics/example/module_backend

-- Configuring incomplete, errors occurred!

Elliptics fails to build on Mac OS X 10.9.2

Hi!

Guys, do you support Mac OS X?

I can't build elliptics because of the absence of epoll.h =(

➜  build git:(lts) make
Scanning dependencies of target elliptics_cocaine
[  1%] Building CXX object srw/CMakeFiles/elliptics_cocaine.dir/srw.cpp.o
In file included from /Users/ikorolev/repos/elliptics/srw/srw.cpp:48:
/Users/ikorolev/repos/elliptics/srw/../library/elliptics.h:25:10: fatal error: 'sys/epoll.h' file not found
#include <sys/epoll.h>
         ^
1 error generated.
make[2]: *** [srw/CMakeFiles/elliptics_cocaine.dir/srw.cpp.o] Error 1
make[1]: *** [srw/CMakeFiles/elliptics_cocaine.dir/all] Error 2
make: *** [all] Error 2

Namespace in elliptics python package?

Hello!

I can't find how to use namespaces (like the dnet_ioclient -N flag, or namespaces in the cocaine driver) in your Python package. Is it possible somehow?

I've tried adding namespace prefixes before the real object names, just as described in the dnet_ioclient manual, in the -N param section:
Use this namespace for operations. Namespace is a prefix that added to the filename. Assume that you have 2 different projects with the same filenames, and you need to save them in the storage. Just set the prefixes of projects (p1. and p2. for example) to this field for read/write/delete/ operations to have a different keys for the same filenames (p1.cfg.xml, p2.cfg.xml for example).

So, I tried to write from Python like this:

s = elliptics.Session(node)
s.write_data("p1.TESTKEY", "123123", 0)

And then I tried:

dnet_ioclient -r localhost:1025:2-0 -g 2 -N p1 -D TESTKEY
2:d4bfc217bcd6...dd9f5e04d9cc: Failed to process READ command: No such file or directory: -2

dnet_ioclient -r localhost:1025:2-0 -g 2 -D p1.TESTKEY
123123

So namespaces can't be used from the Python package?!
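Since the manual quoted above defines a namespace as nothing more than a prefix added to the filename, it can be emulated client-side while the binding lacks a namespace setter. Note, though, that the failed -N read above hints that the server-side transform may not be a plain textual prefix in every version, so verify that both paths produce the same keys on your installation. A hypothetical helper (not elliptics API):

```python
class NamespacedKeys:
    """Builds 'namespace.key' names the way the dnet_ioclient manual
    describes for -N, so keys written from Python can match keys
    addressed with the same prefix on the command line."""

    def __init__(self, namespace):
        self.namespace = namespace

    def key(self, name):
        return "%s.%s" % (self.namespace, name)

# Usage with a session (session calls as in the question):
#   ns = NamespacedKeys("p1")
#   s.write_data(ns.key("TESTKEY"), "123123", 0)
```

This keeps two projects' identically named files distinct (p1.cfg.xml vs p2.cfg.xml) exactly as the manual suggests.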

pypi python package?

Hi there,

Would it be feasible to publish the Python package on PyPI?

Not having it installable through pip makes testing against different Python versions more complicated (especially on Travis).

Thanks a lot!

Improve oplocks

Introduction

Oplocks is a mechanism used by dnet_ioserv to order access to stored keys. All commands without DNET_COMMAND_FLAGS_NOLOCK go through oplocks and are executed with unique ownership of the key they use.

Current oplocks behaviour and restrictions

Main structure

All locked keys are kept in a dnet_locks object that consists of:

  • a locked-keys rbtree
  • a list of preallocated entries to be used in the rbtree. Currently dnet_ioserv preallocates 1024 entries, which means only 1024 different keys can be locked at any moment and only 1024 threads can execute commands
  • a lock used to order access to dnet_locks

How does it work

When an io pool thread takes a request from the queue, it tries to lock the key from the request:

  • if the key is already locked, the thread waits until the key is free
  • if the key is free:
    • if not all preallocated entries are used, the thread locks the key
    • if all preallocated entries are used, the thread handles the request without a lock

It means:

  • if we have intensive streams of commands with non-unique keys, some threads will wait on keys while requests with unlocked keys sit in the queue
  • dnet_locks is global to the server node; if we have many backends and the total number of threads across all backends exceeds 1024, we risk executing a command that bypasses oplocks
  • locked keys do not record the group_id or backend_id on which they are locked. If we have blocking commands for the same key on 2 backends, for example 2 backends that replicate each other, those backends will compete for keys, and each time one backend will wait until the other unlocks a key.

What are we trying to accomplish

  • remove the 1024-entry preallocation limit
  • stop io pool threads from waiting on a locked key while the queue has requests with unlocked keys
  • make locks on different backends independent

How can we accomplish it

Remove the 1024-entry preallocation limit

Use a container with reservation rather than preallocation. Splitting the node-wide dnet_locks into a number of backend-wide dnet_locks will also help.

Avoid waiting on a locked key

Make a queue wrapper that either returns the next request with an unlocked key or waits for one to appear. This wrapper must also guarantee that requests for the same key are handled in the order in which they appear in the queue.

Make locks on different backends independent

Move dnet_locks into dnet_backend_io. Each backend will then keep only the keys that are locked on that backend.
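The per-backend split can be modelled as one lock table per backend_id, with entries created on demand, which also removes the fixed preallocation limit. This is a plain-Python model of the idea, not the dnet_locks implementation (entries are never reclaimed in this sketch):

```python
import threading
from collections import defaultdict

class BackendLocks:
    """One lock table per backend: locking key K on backend 0 does not
    block locking the same K on backend 1. Entries are created on
    demand, so there is no fixed preallocation limit."""

    def __init__(self):
        self._guard = threading.Lock()
        self._tables = defaultdict(dict)   # backend_id -> {key: Lock}

    def _lock_for(self, backend_id, key):
        with self._guard:
            table = self._tables[backend_id]
            if key not in table:
                table[key] = threading.Lock()
            return table[key]

    def acquire(self, backend_id, key):
        self._lock_for(backend_id, key).acquire()

    def try_acquire(self, backend_id, key):
        # non-blocking attempt; False means the key is busy on this backend
        return self._lock_for(backend_id, key).acquire(False)

    def release(self, backend_id, key):
        self._lock_for(backend_id, key).release()
```

With this layout, two backends replicating each other no longer serialize on a shared key: each backend only contends with its own io threads.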

Test fails when building via rpmbuild

Log:
Scanning dependencies of target test
make[3]: Leaving directory `/root/rpmbuild/BUILD/elliptics-2.25.4.5'
make -f tests/CMakeFiles/test.dir/build.make tests/CMakeFiles/test.dir/build
make[3]: Entering directory `/root/rpmbuild/BUILD/elliptics-2.25.4.5'
cd /root/rpmbuild/BUILD/elliptics-2.25.4.5/tests && /usr/bin/python2.6 /root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/run_tests.py /root/rpmbuild/BUILD/elliptics-2.25.4.5/tests /root/rpmbuild/BUILD/elliptics-2.25.4.5/tests dnet_cpp_test dnet_cpp_cache_test dnet_cpp_capped_test dnet_cpp_api_test
Running 4 tests

Start 1 of 4: dnet_cpp_test:

Set base directory: "/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/result/dnet_cpp_test"
Set cocaine run directory: "/tmp/elliptics-test-run-4aec2cf8/"
Starting 2 servers
Starting server #1
Started server #1
Starting server #2
Started server #2
Running 39 test cases...
/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/test.cpp(587): fatal error in "std::bind( test_range_request, create_session(n, {2}, 0, 0), 0, 255, 2 )": critical check read_result.size() == std::min(limit_num, int(item_count) - limit_start) failed [79 != 16]
/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/test.cpp(587): fatal error in "std::bind( test_range_request, create_session(n, {2}, 0, 0), 3, 14, 2 )": critical check read_result.size() == std::min(limit_num, int(item_count) - limit_start) failed [14 != 13]
/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/test.cpp(598): fatal error in "std::bind( test_range_request, create_session(n, {2}, 0, 0), 7, 3, 2 )": critical check { data.begin() + limit_start, data.begin() + limit_start + read_result.size() } == { read_result_vector.begin(), read_result_vector.end() } failed.
Mismatch in a position 0: range_test_data_7 != bulk_write400
Mismatch in a position 1: range_test_data_8 != bulk_write68
Mismatch in a position 2: range_test_data_9 != bulk_write22

*** 3 failures detected in test suite "Master Test Suite"

Result: Failed (201) 12.1716740131 sec

Start 2 of 4: dnet_cpp_cache_test:

Set base directory: "/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/result/dnet_cpp_cache_test"
Set cocaine run directory: "/tmp/elliptics-test-run-22369b6e/"
Starting 1 servers
Starting server #1
Started server #1
Running 4 test cases...

*** No errors detected

Result: Passed 14.0351989269 sec

Start 3 of 4: dnet_cpp_capped_test:

Set base directory: "/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/result/dnet_cpp_capped_test"
Set cocaine run directory: "/tmp/elliptics-test-run-904782/"
Starting 1 servers
Starting server #1
Started server #1
Running 1 test case...

*** No errors detected

Result: Passed 2.03252196312 sec

Start 4 of 4: dnet_cpp_api_test:

Set base directory: "/root/rpmbuild/BUILD/elliptics-2.25.4.5/tests/result/dnet_cpp_api_test"
Set cocaine run directory: "/tmp/elliptics-test-run-e0de8c7/"
Starting 1 servers
Starting server #1
Started server #1
Running 3 test cases...

*** No errors detected

Result: Passed 2.03548502922 sec

Tests are finised
make[3]: *** [tests/CMakeFiles/test] Error 1
make[3]: Leaving directory `/root/rpmbuild/BUILD/elliptics-2.25.4.5'
make[2]: *** [tests/CMakeFiles/test.dir/all] Error 2
make[2]: Leaving directory `/root/rpmbuild/BUILD/elliptics-2.25.4.5'
make[1]: *** [tests/CMakeFiles/test.dir/rule] Error 2
make[1]: Leaving directory `/root/rpmbuild/BUILD/elliptics-2.25.4.5'
make: *** [test] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.MRt9ZN (%build)

recovery broken?

Gents,
Tried the latest version and can't perform a simple merge recovery: the dnet_recovery script just gets stuck, nothing else. Did you test it?

How to disable handystats in the configuration file

I have tried to turn off handystats and found that a handystats config can be passed directly (the source checks `if (options.has("handystats_config"))`), but I got an error on initialization:

2014-10-27 20:35:14.705586 0000000000000000/8813/8813 ERROR: cnf: failed to read config file '/etc/elliptics/elliptics-node2.conf': path.options.handystats_config must be a string

I'm trying to follow advice shindo/handystats#2 (comment)
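Judging by the error message ("path.options.handystats_config must be a string"), the option apparently lives under "options" and expects a string path. A hedged guess at the config fragment (the file path is hypothetical):

```json
{
    "options": {
        "handystats_config": "/etc/elliptics/handystats.json"
    }
}
```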

Cocaine plugin: find method

Let's create a key with some tags:

cocaine-tool call kw_storage write "'first', '3', 'data3', ['t1', 't2', 't3']"

And try to find it:

cocaine-tool call kw_storage find "'first', ['t3']"
['3']

Great! And what about several tags?

cocaine-tool call kw_storage find "'first', ['t2', 't1', 't3']"
['3', '3', '3']

Ooops! Three elements? Ok, let's play with more keys:

cocaine-tool call kw_storage write "'first', '4', 'data4', ['t1', 't2', 't3']"
cocaine-tool call kw_storage write "'first', '5', 'data5', ['t1', 't2']"

Find'em all!

cocaine-tool call kw_storage find "'first', ['t2', 't1', 't3']"
['4', '4', '4', '3', '3', '3']
cocaine-tool call kw_storage find "'first', ['t2', 't1']"
['4', '4', '3', '3', '5', '5']
cocaine-tool call kw_storage find "'first', ['t1']"
['4', '3', '5']

So each key appears in the output once per tag we are searching for...

And one more question: can we somehow find all the keys in collection "first"? This

cocaine-tool call kw_storage find "'first', []"

doesn't help =(
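The behaviour one would expect from find() can be sketched as a plain tag index (a Python toy model of the semantics, not the actual plugin code; the data mirrors the keys written above): each tag maps to a set of keys, a multi-tag query is the intersection of those sets (so each key appears once, not once per tag), and an empty tag list returns every key in the collection.

```python
from functools import reduce

# Toy tag index for collection "first", mirroring the writes above:
# keys "3" and "4" carry t1/t2/t3, key "5" carries t1/t2.
index = {
    "t1": {"3", "4", "5"},
    "t2": {"3", "4", "5"},
    "t3": {"3", "4"},
}

def find(tags, index):
    """Return the set of keys matching ALL given tags (deduplicated).
    An empty tag list returns every key in the collection."""
    if not tags:
        return set().union(*index.values())
    return reduce(set.intersection, (index[t] for t in tags))
```

Under these semantics, find "'first', ['t2', 't1', 't3']" would return {'3', '4'} with no duplicates, and find "'first', []" would list the whole collection.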

JSON config

Could you explain the new scheme of log-level values in the JSON configuration file?
