Comments (14)
And in the local database, the losses are even greater:
./bx fetch-history -c ./bx.cfg 1BrT827NCgxjctnEBdLiuDzukupwWHP1i2 | grep -c 18446744073709551615
43
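(For context: 18446744073709551615 is max_uint64; as the rest of the thread implies, it appears in fetch-history output where a spend could not be correlated to an output, so the grep above counts such losses. A trivial standalone check:)

#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    // The grepped value is simply the maximum unsigned 64-bit integer.
    static_assert(UINT64_MAX == 18446744073709551615ULL, "max_uint64");
    std::cout << std::numeric_limits<uint64_t>::max() << std::endl;
}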
What is meant by "local database" above?
This happens when I run the libbitcoin server myself and point the explorer config at the local URL.
But, as I said above, the problem also appears with the default config.
This is what I get when running against the default community server:
Note that the last two spends have no correlated outputs (receipts). This implies that the outputs are not indexed to the address 1BrT827NCgxjctnEBdLiuDzukupwWHP1i2, which can be caused by nonstandard scripts (including segwit for more recent txs).
It is also possible that spend correlation is being impacted by correlation ID hash collision. If the server is fully-indexed and the outputs are standard (allowing for address parsing) then this is the only possible cause short of a code error.
The use of a 64 bit hash to store input-output correlations in the spend table is a longstanding weakness of libbitcoin-server. It does not impact validation as the spend table only supports server queries. In v4 the spend table will be eliminated altogether and these queries will rely on a relational surrogate key index, resolving this issue and significantly reducing server storage.
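For a sense of scale (my own back-of-envelope estimate, not a figure from the project): by the birthday bound, n items hashed into 64 bits collide with probability roughly n^2 / 2^65, so at a billion indexed points a collision is already a few-percent event:

#include <cmath>
#include <iostream>

int main()
{
    // Birthday approximation: p ~ n^2 / 2^65 for n independent 64-bit hashes.
    const double n = 1e9;  // e.g. one billion correlation entries (assumed)
    const double p = (n * n) / std::pow(2.0, 65);
    std::cout << "P(at least one collision) ~ " << p << std::endl;  // ~0.027
}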
History correlation occurs in the client library. It should be straightforward to modify the client to report the collision, and rebuild bx to verify that in this case collision is the problem.
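As an illustrative sketch of such reporting (not the actual libbitcoin client code; correlation_key is a hypothetical stand-in for the server's 64-bit checksum derivation), a collision is just two distinct outpoints mapping to the same key:

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the 64-bit checksum the spend table derives
// from an outpoint; the real derivation lives in libbitcoin-server.
uint64_t correlation_key(const std::string& outpoint)
{
    return std::hash<std::string>{}(outpoint);
}

int main()
{
    std::unordered_map<uint64_t, std::string> seen;

    for (const std::string outpoint: { "txid1:0", "txid2:1" })
    {
        const auto key = correlation_key(outpoint);
        const auto it = seen.find(key);

        if (it != seen.end() && it->second != outpoint)
            std::cout << "collision: " << outpoint << " vs " << it->second << std::endl;
        else
            seen.emplace(key, outpoint);
    }
}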
As far as I remember, the transactions there are standard, not multisig, etc. You can check to make sure.
I did not quite understand the point about the collision.
This query runs against the history database, and the key is the address (or a derivative of it). Are you saying that at one point in time the key is indexed into one cell, and at another time into a different cell?
The correlation does not occur in the client library; the client library reproduces exactly what the history database returns. I checked.
It really is in the history database. I wrote this code:
history_database::list history_database::get(const short_hash& key,
    size_t limit, size_t from_height) const
{
    list result;
    payment_record payment;
    const auto start = rows_multimap_.lookup(key);
    const auto records = record_multimap_iterable(rows_list_, start);
    int count = 0;

    for (const auto index: records)
    {
        if (limit > 0 && result.size() >= limit)
            break;

        const auto record = rows_list_.get(index);
        auto deserial = make_unsafe_deserializer(REMAP_ADDRESS(record));

        // Failed reads are conflated with skipped returns.
        if (payment.from_data(deserial, from_height))
            result.push_back(payment);
        else
            std::cout << "Incorrect load data " << encode_base16(key) << std::endl;

        // Debug: count every row traversed for this key.
        ++count;
    }

    std::cout << "count " << count << std::endl;
    return result;
}
void history_database::store(const short_hash& key,
    const payment_record& payment)
{
    const auto write = [&](byte_serializer& serial)
    {
        payment.to_data(serial, false);
    };

    rows_multimap_.add_row(key, write);

    // Debug hook: after each insert for this one address, immediately
    // re-read its rows and print the traversal count.
    if (encode_base16(key) == "770b71d77f7c90522be3a050a90545d6167d7629")
    {
        get(key, 0, 0);
        std::cout << std::endl;
    }
}
I loaded the database and got this output:
count 36
count 37
count 38
count 38
count 39
That is, one number is repeated: the count does not advance between two inserts, so the history db loses the previous record between the two writes.
Could you look at the surrounding code and check whether there is a race condition or some other error?
Thanks for following up. Your read from within the write may make assumptions that don't hold. add_row is atomic and should not have any problem with concurrency. I'm on my phone for another week, so it will be hard for me to review.
I put a mutex around all methods of the history db and these errors went away!
I also looked at a few places and did not understand how the locks work there. For example:
void record_multimap<KeyType>::add_to_list(memory_ptr start_info,
    write_function write)
{
    const auto address = REMAP_ADDRESS(start_info);

    // Critical Section
    ///////////////////////////////////////////////////////////////////////
    mutex_.lock_shared();
    const auto old_begin = from_little_endian_unsafe<array_index>(address);
    mutex_.unlock_shared();
    ///////////////////////////////////////////////////////////////////////

    const auto new_begin = records_.insert(old_begin);
    const auto memory = records_.get(new_begin);
    const auto data = REMAP_ADDRESS(memory);
    auto serial_record = make_unsafe_serializer(data);
    serial_record.write_delegated(write);

    // The records_ and start_info remap safe pointers are in distinct files.
    auto serial_link = make_unsafe_serializer(address);

    // Critical Section
    ///////////////////////////////////////////////////////////////////////
    unique_lock lock(mutex_);
    serial_link.template write_little_endian<array_index>(new_begin);
    ///////////////////////////////////////////////////////////////////////
}
What happens if two threads read the same value of old_begin and both try to write their new link back?
Other places in the code also do not inspire confidence.
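Here is that lost update reduced to a standalone toy (not libbitcoin code): two threads load the same head, each links its record to it, and the second store of the head overwrites the first, orphaning one record:

#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

// Toy list with the same read-insert-relink pattern as add_to_list;
// head plays the role of the link stored at 'address'.
struct toy_list
{
    std::vector<uint32_t> next;
    std::atomic<uint32_t> head{UINT32_MAX};   // UINT32_MAX = empty list

    void add(uint32_t record)
    {
        const auto old_begin = head.load();   // both threads may read the same head
        next[record] = old_begin;             // new record points at the old head
        head.store(record);                   // last store wins; the loser is orphaned
    }

    size_t count() const
    {
        size_t n = 0;
        for (auto i = head.load(); i != UINT32_MAX; i = next[i])
            ++n;
        return n;
    }
};

int main()
{
    for (int trial = 0; trial < 100000; ++trial)
    {
        toy_list list;
        list.next.resize(2);
        std::thread a([&] { list.add(0); });
        std::thread b([&] { list.add(1); });
        a.join();
        b.join();

        if (list.count() != 2)
        {
            std::cout << "record orphaned on trial " << trial << std::endl;
            return 0;
        }
    }

    std::cout << "no orphan observed in 100000 trials" << std::endl;
}

Making the whole read-insert-relink sequence atomic, as discussed below, removes the window.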
The above critical sections exist only to guarantee atomicity of the link value read and write.
I believe you are correct: the list insertion has a flaw whereby concurrent inserts can result in the orphaning of an inserted element. This would affect the address history index only in the case of concurrent writes of the same row (address hash). [There would be no node/consensus impact.]
Expansion of the critical sections beyond value atomicity may introduce a deadlock risk, so I would need to look at it more closely to ensure a fix is both safe and optimal. Otherwise, expanding the critical section above to cover the full method with an upgraded lock should resolve the orphaning.
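A sketch of that direction, reusing only the identifiers from the snippet above (this is not the committed patch, and the deadlock caveat still applies):

// Sketch only: serialize the whole read-insert-relink sequence so two
// writers can never observe the same old_begin for one key.
template <typename KeyType>
void record_multimap<KeyType>::add_to_list(memory_ptr start_info,
    write_function write)
{
    const auto address = REMAP_ADDRESS(start_info);

    // Critical Section (now spans the full method)
    ///////////////////////////////////////////////////////////////////////
    unique_lock lock(mutex_);
    const auto old_begin = from_little_endian_unsafe<array_index>(address);
    const auto new_begin = records_.insert(old_begin);
    const auto memory = records_.get(new_begin);
    auto serial_record = make_unsafe_serializer(REMAP_ADDRESS(memory));
    serial_record.write_delegated(write);
    auto serial_link = make_unsafe_serializer(address);
    serial_link.template write_little_endian<array_index>(new_begin);
    ///////////////////////////////////////////////////////////////////////
}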
I don't understand your last comment, could you clarify?
For example, this code also causes a race condition:
void record_multimap<KeyType>::add_row(const KeyType& key,
    write_function write)
{
    const auto start_info = map_.find(key);

    if (!start_info)
    {
        create_new(key, write);
        return;
    }
If two threads simultaneously take start_info and both see that it does not exist, both will create a new row for the same key.
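The same check-then-act race reduced to a toy map, alongside the obvious lock-across-find-or-create fix (illustrative only, not the libbitcoin patch):

#include <iostream>
#include <mutex>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> map_;
std::mutex mutex_;

// Racy: threads A and B can both pass the find() before either emplace()
// runs, so the key is "created" twice and one row's records are lost.
void add_row_unsafe(const std::string& key)
{
    if (map_.find(key) == map_.end())
        map_.emplace(key, 0);
}

// Fixed: the find-or-create is atomic under the lock.
void add_row_safe(const std::string& key)
{
    std::lock_guard<std::mutex> lock(mutex_);
    if (map_.find(key) == map_.end())
        map_.emplace(key, 0);
}

int main()
{
    add_row_safe("770b71d77f7c90522be3a050a90545d6167d7629");
    std::cout << map_.size() << std::endl;   // 1
}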
Yes, the row allocator needs to be protected in the same manner as the memory allocator. Thanks for your work on this. I can patch within a week.
It's taken a little longer than expected, but I hope to finish this up in the next couple of days.
History records are safely written, read and popped, with the exception that concurrent linking of a new element is a race won by only one of the writers. Behavior is well-defined but leads to record loss in the case where two records are concurrently linked against the same payment address. This becomes more likely for heavily-used addresses with multiple updates from transactions in the same block (which is the only case of concurrent write to the same hash table entry).
I've resynced my local store and verified the balance as well as correct history pairings for 1BrT827NCgxjctnEBdLiuDzukupwWHP1i2 and 1966U1pjj15tLxPXZ19U48c99EJDkdXeqb.