leveldb's Introduction

LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.

This repository is receiving very limited maintenance. We will only review the following types of changes.

  • Fixes for critical bugs, such as data loss or memory corruption
  • Changes absolutely needed by internally supported leveldb clients. These typically fix breakage introduced by a language/standard library/OS update


Authors: Sanjay Ghemawat ([email protected]) and Jeff Dean ([email protected])

Features

  • Keys and values are arbitrary byte arrays.
  • Data is stored sorted by key.
  • Callers can provide a custom comparison function to override the sort order.
  • The basic operations are Put(key,value), Get(key), Delete(key). (A minimal usage sketch follows this list.)
  • Multiple changes can be made in one atomic batch.
  • Users can create a transient snapshot to get a consistent view of data.
  • Forward and backward iteration is supported over the data.
  • Data is automatically compressed using the Snappy compression library, but Zstd compression is also supported.
  • External activity (file system operations etc.) is relayed through a virtual interface so users can customize the operating system interactions.
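
A minimal usage sketch of these operations, assuming the library is installed and linked; the database path and keys are arbitrary examples, and error handling is abbreviated:

#include <cassert>
#include <string>

#include "leveldb/db.h"
#include "leveldb/write_batch.h"

int main() {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);
  assert(s.ok());

  // Basic operations: Put, Get, Delete.
  s = db->Put(leveldb::WriteOptions(), "key1", "value1");
  std::string value;
  if (s.ok()) s = db->Get(leveldb::ReadOptions(), "key1", &value);
  if (s.ok()) s = db->Delete(leveldb::WriteOptions(), "key1");

  // Multiple changes applied in one atomic batch.
  leveldb::WriteBatch batch;
  batch.Put("key2", "value2");
  batch.Delete("key1");
  s = db->Write(leveldb::WriteOptions(), &batch);

  // Forward iteration over the key-sorted data.
  leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // it->key() and it->value() are Slices valid until the iterator moves.
  }
  delete it;

  delete db;  // Closes the database.
  return 0;
}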

Documentation

LevelDB library documentation is online and bundled with the source code.

Limitations

  • This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.
  • Only a single process (possibly multi-threaded) can access a particular database at a time.
  • There is no client-server support built in to the library. An application that needs such support will have to wrap its own server around the library.

Getting the Source

git clone --recurse-submodules https://github.com/google/leveldb.git

Building

This project supports CMake out of the box.

Building for POSIX

Quick start:

mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && cmake --build .

Building for Windows

First generate the Visual Studio 2017 project/solution files:

mkdir build
cd build
cmake -G "Visual Studio 15" ..

The default build targets x86. For a 64-bit build, run:

cmake -G "Visual Studio 15 Win64" ..

To compile the Windows solution from the command-line:

devenv /build Debug leveldb.sln

or open leveldb.sln in Visual Studio and build from within.

Please see the CMake documentation and CMakeLists.txt for more advanced usage.

Contributing to the leveldb Project

This repository is receiving very limited maintenance. We will only review the following types of changes.

  • Bug fixes
  • Changes absolutely needed by internally supported leveldb clients. These typically fix breakage introduced by a language/standard library/OS update

The leveldb project welcomes contributions. leveldb's primary goal is to be a reliable and fast key/value store. Changes that are in line with the features/limitations outlined above, and meet the requirements below, will be considered.

Contribution requirements:

  1. Tested platforms only. We generally will only accept changes for platforms that are compiled and tested. This means POSIX (for Linux and macOS) or Windows. Very small changes will sometimes be accepted, but consider that more of an exception than the rule.

  2. Stable API. We strive very hard to maintain a stable API. Changes that require changes for projects using leveldb might be rejected without sufficient benefit to the project.

  3. Tests: All changes must be accompanied by a new (or changed) test, or a sufficient explanation as to why a new (or changed) test is not required.

  4. Consistent Style: This project conforms to the Google C++ Style Guide. To ensure your changes are properly formatted please run:

    clang-format -i --style=file <file>
    

We are unlikely to accept contributions to the build configuration files, such as CMakeLists.txt. We are focused on maintaining a build configuration that allows us to test that the project works in a few supported configurations inside Google. We are not currently interested in supporting other requirements, such as different operating systems, compilers, or build systems.

Submitting a Pull Request

Before any pull request will be accepted the author must first sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

To keep the commit timeline linear, squash your changes down to a single commit and rebase on google/leveldb/main. This keeps the history easy to sync with the internal repository at Google. More information is available on GitHub's About Git rebase page.

Performance

Here is a performance report (with explanations) from the run of the included db_bench program. The results are somewhat noisy, but should be enough to get a ballpark performance estimate.

Setup

We use a database with a million entries. Each entry has a 16 byte key, and a 100 byte value. Values used by the benchmark compress to about half their original size.

LevelDB:    version 1.1
Date:       Sun May  1 12:11:26 2011
CPU:        4 x Intel(R) Core(TM)2 Quad CPU    Q6600  @ 2.40GHz
CPUCache:   4096 KB
Keys:       16 bytes each
Values:     100 bytes each (50 bytes after compression)
Entries:    1000000
Raw Size:   110.6 MB (estimated)
File Size:  62.9 MB (estimated)

Write performance

The "fill" benchmarks create a brand new database, in either sequential, or random order. The "fillsync" benchmark flushes data from the operating system to the disk after every operation; the other write operations leave the data sitting in the operating system buffer cache for a while. The "overwrite" benchmark does random writes that update existing keys in the database.

fillseq      :       1.765 micros/op;   62.7 MB/s
fillsync     :     268.409 micros/op;    0.4 MB/s (10000 ops)
fillrandom   :       2.460 micros/op;   45.0 MB/s
overwrite    :       2.380 micros/op;   46.5 MB/s

Each "op" above corresponds to a write of a single key/value pair. I.e., a random write benchmark goes at approximately 400,000 writes per second.

Each "fillsync" operation costs much less (0.3 millisecond) than a disk seek (typically 10 milliseconds). We suspect that this is because the hard disk itself is buffering the update in its memory and responding before the data has been written to the platter. This may or may not be safe based on whether or not the hard disk has enough power to save its memory in the event of a power failure.

Read performance

We list the performance of reading sequentially in both the forward and reverse direction, and also the performance of a random lookup. Note that the database created by the benchmark is quite small. Therefore the report characterizes the performance of leveldb when the working set fits in memory. The cost of reading a piece of data that is not present in the operating system buffer cache will be dominated by the one or two disk seeks needed to fetch the data from disk. Write performance will be mostly unaffected by whether or not the working set fits in memory.

readrandom  : 16.677 micros/op;  (approximately 60,000 reads per second)
readseq     :  0.476 micros/op;  232.3 MB/s
readreverse :  0.724 micros/op;  152.9 MB/s

LevelDB compacts its underlying storage data in the background to improve read performance. The results listed above were done immediately after a lot of random writes. The results after compactions (which are usually triggered automatically) are better.

readrandom  : 11.602 micros/op;  (approximately 85,000 reads per second)
readseq     :  0.423 micros/op;  261.8 MB/s
readreverse :  0.663 micros/op;  166.9 MB/s

Some of the high cost of reads comes from repeated decompression of blocks read from disk. If we supply enough cache to leveldb so that it can hold the uncompressed blocks in memory, the read performance improves again:

readrandom  : 9.775 micros/op;  (approximately 100,000 reads per second before compaction)
readrandom  : 5.215 micros/op;  (approximately 190,000 reads per second after compaction)
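
One way to supply such a cache is Options::block_cache. A minimal sketch; the 100 MB size and database path are arbitrary, and the cache must outlive the DB that uses it:

#include <string>

#include "leveldb/cache.h"
#include "leveldb/db.h"

int main() {
  leveldb::Options options;
  options.create_if_missing = true;
  // A larger block cache lets uncompressed blocks stay in memory between reads.
  leveldb::Cache* cache = leveldb::NewLRUCache(100 * 1048576);  // 100 MB
  options.block_cache = cache;

  leveldb::DB* db = nullptr;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/testdb", &db);
  if (!s.ok()) {
    delete cache;
    return 1;
  }

  // ... reads are now served from cached uncompressed blocks when possible ...

  delete db;     // Close the database first,
  delete cache;  // then release the cache it was using.
  return 0;
}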

Repository contents

See doc/index.md for more explanation. See doc/impl.md for a brief overview of the implementation.

The public interface is in include/leveldb/*.h. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.

Guide to header files:

  • include/leveldb/db.h: Main interface to the DB: Start here.

  • include/leveldb/options.h: Control over the behavior of an entire database, and also control over the behavior of individual reads and writes.

  • include/leveldb/comparator.h: Abstraction for user-specified comparison function. If you want just bytewise comparison of keys, you can use the default comparator, but clients can write their own comparator implementations if they want custom ordering (e.g. to handle different character encodings, etc.). A minimal custom comparator sketch follows this list.

  • include/leveldb/iterator.h: Interface for iterating over data. You can get an iterator from a DB object.

  • include/leveldb/write_batch.h: Interface for atomically applying multiple updates to a database.

  • include/leveldb/slice.h: A simple module for maintaining a pointer and a length into some other byte array.

  • include/leveldb/status.h: Status is returned from many of the public interfaces and is used to report success and various kinds of errors.

  • include/leveldb/env.h: Abstraction of the OS environment. A posix implementation of this interface is in util/env_posix.cc.

  • include/leveldb/table.h, include/leveldb/table_builder.h: Lower-level modules that most clients probably won't use directly.
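
As a rough illustration of the comparator interface mentioned above, here is a minimal sketch of a custom comparator that reverses the default bytewise order. The class name and the no-op FindShortestSeparator/FindShortSuccessor bodies are illustrative choices, not part of the library:

#include <string>

#include "leveldb/comparator.h"
#include "leveldb/slice.h"

// Sorts keys in reverse bytewise order by delegating to the built-in
// BytewiseComparator and negating the result.
class ReverseBytewiseComparator : public leveldb::Comparator {
 public:
  int Compare(const leveldb::Slice& a, const leveldb::Slice& b) const override {
    return -leveldb::BytewiseComparator()->Compare(a, b);
  }
  // The name is stored in the database and checked when it is reopened.
  const char* Name() const override { return "ReverseBytewiseComparator"; }
  // Advanced hooks used to shrink index entries; doing nothing is valid.
  void FindShortestSeparator(std::string*, const leveldb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}
};

An instance would be passed via Options::comparator when opening the database, and the same ordering must then be used every time that database is opened.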

leveldb's People

Contributors

allangj, andyli029, caodhuan, cmumford, davidair, davidsgrogan, ehds, felipecrv, ghemawat, ivanabc, jl0x61, lingbin, m3bm3b, maplefu, mikewiacek, myccccccc, neal-zhu, paulirish, pkasting, proller, pwnall, reillyeon, rex4539, ssiddhartha, tzik, usurai, wankai, wineway, wzk784533, zmodem

leveldb's Issues

Support alternate compilers.

Original issue 46 created by [email protected] on 2011-10-10T21:20:20.000Z:

What steps will reproduce the problem?

  1. CXX=CC gmake

What is the expected output? What do you see instead?

CC should be used as the C++ compiler. g++ is instead used.

What version of the product are you using? On what operating system?

git tip. Solaris.

Please provide any additional information below.

Need a way to specify Snappy install path

Original issue 26 created by electrum on 2011-08-01T00:06:28.000Z:

There doesn't seem to be a way to tell the build system that Snappy is installed in a non-standard location (e.g. in /opt/local using MacPorts).

version_set_test.cc missing

Original issue 11 created by ashoemaker on 2011-06-22T03:19:13.000Z:

r32 added version_set_test to the Makefile, though db/version_set_test.cc is not checked in:

make: *** No rule to make target 'db/version_set_test.o', needed by 'version_set_test'. Stop.

High CPU usage after reopening leveldb

Original issue 14 created by private188 on 2011-06-29T07:00:07.000Z:

What steps will reproduce the problem?

  1. expand the default target file size and the max bytes for level as below:

static const int kTargetFileSize = 64 << 20;// 2 * 1048576;
double result = 256 * 1048576.0;// 10 * 1048576.0;

  2. Insert about 240 MB of data; since MaxBytesForLevel is 256 MB, all the data can reside on level-0:

$ ls -chs cache/db
total 206M
4.0K MANIFEST-000004 206M 000005.log 4.0K CURRENT 0 LOCK 4.0K LOG 4.0K LOG.old

  3. Restart my program, which is linked with libleveldb.a. The log file kept growing, and my program, gate_cache, occupied about 128% CPU on a dual-core machine:

$ ll -chs cache/db
total 466M
4.0K drwxr-xr-x 2 peter peter 4.0K 2011-06-29 14:29 ./
4.0K drwxr-xr-x 6 peter peter 4.0K 2011-06-29 14:18 ../
205M -rw-r--r-- 1 peter peter 205M 2011-06-29 14:27 000007.sst
30M -rw-r--r-- 1 peter peter 30M 2011-06-29 14:28 000010.sst
61M -rw-r--r-- 1 peter peter 61M 2011-06-29 14:36 000011.log
66M -rw-r--r-- 1 peter peter 66M 2011-06-29 14:29 000012.sst
65M -rw-r--r-- 1 peter peter 65M 2011-06-29 14:29 000013.sst
42M -rw-r--r-- 1 peter peter 42M 2011-06-29 14:29 000014.sst
4.0K -rw-r--r-- 1 peter peter 16 2011-06-29 14:28 CURRENT
4.0K -rw-r--r-- 1 peter peter 64K 2011-06-29 14:29 LOG
4.0K -rw-r--r-- 1 peter peter 64K 2011-06-29 14:28 LOG.old
4.0K -rw-r--r-- 1 peter peter 64K 2011-06-29 14:29 MANIFEST-000009

top - 14:36:50 up 9 days, 4:02, 2 users, load average: 7.01, 7.59, 5.01
Tasks: 214 total, 2 running, 211 sleeping, 1 stopped, 0 zombie
Cpu(s): 19.7%us, 74.2%sy, 0.0%ni, 5.6%id, 0.0%wa, 0.0%hi, 0.5%si, 0.0%st
Mem: 3992596k total, 3635948k used, 356648k free, 91076k buffers
Swap: 4002808k total, 147492k used, 3855316k free, 916524k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1482 peter 20 0 1756m 1.1g 1756 S 128 28.3 10:37.14 gate_cache

What is the expected output? What do you see instead?

  1. Low CPU usage; as long as I do not restart my program, the CPU stays low.
  2. The log file should not keep expanding, since I inserted the same data again later on.

What version of the product are you using? On what operating system?

trunk in the SVN

Please provide any additional information below.

I do not use C++0x, but a port to the older GCC 4.1 using libatomic_ops from HP.

Looking forward to your feedback.

reappearing "ghost" key after 17 steps

Original issue 44 created by josephwnorton on 2011-10-03T16:13:29.000Z:

I'm testing an Erlang-based API for leveldb via an Erlang NIF written in C++. The test model is written in Erlang with the help of the test tool QuickCheck. The test model and test tool have found a minimal failing test case of 17 steps that appears (though not yet proven) to be an issue with leveldb.

I tried to manually create a minimal failing example in pure C++. Unfortunately, I am unable to reproduce the issue with a pure C++ test case.

I attached the failing counterexample and the leveldb data directory after closing the database. I'm hoping the leveldb authors might be able to pinpoint the issue or provide some instructions on how to troubleshoot it.

What steps will reproduce the problem?

  1. open new database
  2. put key1 and val1
  3. close database
  4. open database
  5. delete key2
  6. delete key1
  7. close database
  8. open database
  9. delete key2
  10. close database
  11. open database
  12. put key3 and val1
  13. close database
  14. open database
  15. close database
  16. open database
  17. seek first

What is the expected output? key3 at step 17

What do you see instead? key1 at step 17

> foobar:test().

<<10,0,0,0,12>>/'$end_of_table': [{obj,6,0}]

'$end_of_table'/'$end_of_table': []

'$end_of_table'/'$end_of_table': []

<<18,193,216,96,0,8>>/'$end_of_table': [{obj,6,0}]

<<18,193,216,96,0,8>>/'$end_of_table': [{obj,6,0}]

<<10,0,0,0,12>>/<<18,193,216,96,0,8>>: [{obj,6,0},{obj,6,0}]
ok

What version of the product are you using? On what operating system?

commit 26db4d9
Author: Hans Wennborg <[email protected]>
Date: Mon Sep 26 17:37:09 2011 +0100

The issue repeats on MacOS X Lion and Fedora 15.

Please provide any additional information below.

The following shutdown sequence is performed at the time of closing the database:

leveldb::WriteOptions db_write_options;
leveldb::WriteBatch batch;
leveldb::Status status;

db_write_options.sync = true;
status = h->db->Write(db_write_options, &batch);
if (!status.ok()) {
    return MAKEBADARG(env, status);
}

delete h->db;
h->db = NULL;

Is it possible to seek to the first key less than a given key

Original issue 30 created by john.carrino on 2011-08-13T18:34:17.000Z:

I would like to find the first key that is less than a given key. I can only think of two ways to achieve this goal and they both fail.

  1. seek to key then go back one. This fails because if the key you want is the last key, then you have an invalid iterator and you get a segfault.
  2. seek to key, then check if iterator is invalid (because we are past the end), then set to seekToLast and do the read. This fails because if a key comes in before the seekToLast, then you get the wrong value.

I have attached tests to try each one. Is there an easier way I don't know about?

TEST(DBTest, IterPastEndThenPrev) {
  ASSERT_OK(Put("a", "va"));
  Iterator* iter = db_->NewIterator(ReadOptions());

  iter->Seek("b");
  ASSERT_EQ(IterStatus(iter), "(invalid)");
  iter->Prev();
  ASSERT_EQ(IterStatus(iter), "a->va");
}

TEST(DBTest, IterSnapshotViewSeekToEnd) {
  ASSERT_OK(Put("a", "va"));
  Iterator* iter = db_->NewIterator(ReadOptions());

  iter->Seek("b");
  ASSERT_EQ(IterStatus(iter), "(invalid)");
  ASSERT_OK(Put("c", "vc"));
  iter->SeekToLast();
  ASSERT_EQ(IterStatus(iter), "a->va");
}

CorruptionTest.MissingDescriptor fails on Ubuntu 11.04 w/ snappy in release mode

Original issue 16 created by ashoemaker on 2011-06-29T11:39:14.000Z:

A specific combination of factors causes corruption_test to consistently fail:

  • Ubuntu 11.04 (test passes on OS X)
  • leveldb r34 (test passes on r33 with or without snappy)
  • snappy enabled (test passes with snappy disabled)
  • optimized build (removing -DNDEBUG from the build causes the test to pass)

The Makefile patch from issue 15 was applied in order to build.

Here is the output:

==== Test CorruptionTest.Recovery
expected=100..100; got=100; bad_keys=0; bad_values=0; missed=0
expected=36..36; got=36; bad_keys=0; bad_values=0; missed=64
==== Test CorruptionTest.RecoverWriteError
==== Test CorruptionTest.NewFileErrorDuringWrite
==== Test CorruptionTest.TableFile
expected=99..99; got=99; bad_keys=0; bad_values=1; missed=0
==== Test CorruptionTest.TableFileIndexData
expected=5000..9999; got=7953; bad_keys=0; bad_values=0; missed=5
==== Test CorruptionTest.MissingDescriptor
expected=1000..1000; got=995; bad_keys=0; bad_values=0; missed=5
db/corruption_test.cc:114: failed: 1000 <= 995

Let me know if I can provide more context to help with a repro.

Fails to compile on FreeBSD

Original issue 22 created by dforsythe on 2011-07-23T08:14:20.000Z:

What steps will reproduce the problem?

  1. gmake

What is the expected output? What do you see instead?
Working binaries and tests are expected. The build fails because build_detect_platform doesn't recognize FreeBSD.

What version of the product are you using? On what operating system?
rev 39, FreeBSD 9.0-CURRENT

The attached patch fixes the build.

static_cast char to unsigned on BIG_ENDIAN platforms may cause errors

Original issue 35 created by Alexander.Klishin on 2011-08-23T06:31:19.000Z:

util/coding_test may fail on some BIG_ENDIAN platforms

uname -ms
HP-UX ia64

gcc version 4.3.3 (GCC)

micro test:
==== cast.cpp ====

#include <stdio.h>

int main(int ac, char *av[]) {
  char c = -1;
  printf("cast1=%u cast2=%u\n",
         static_cast<unsigned int>(c),  // !!! wrong conversion
         static_cast<unsigned int>(static_cast<unsigned char>(c)));
  return 0;
}

g++ cast.cpp -o cast

./cast

cast1=4294967295 cast2=255

Problem in file "util/coding.h":

inline uint32_t DecodeFixed32(const char* ptr) {
  ...
  return ((static_cast<uint32_t>(ptr[0]))
          | (static_cast<uint32_t>(ptr[1]) << 8)
          | (static_cast<uint32_t>(ptr[2]) << 16)
          | (static_cast<uint32_t>(ptr[3]) << 24));
}

Should be:

  return ((static_cast<uint32_t>(static_cast<unsigned char>(ptr[0])))
          | (static_cast<uint32_t>(static_cast<unsigned char>(ptr[1])) << 8)
          | (static_cast<uint32_t>(static_cast<unsigned char>(ptr[2])) << 16)
          | (static_cast<uint32_t>(static_cast<unsigned char>(ptr[3])) << 24));

Add GNU/kFreeBSD support

Original issue 38 created by quadrispro on 2011-09-05T07:56:22.000Z:

Hi!

The attached patch will allow leveldb to compile on kFreeBSD platforms.

Regards,

Does not compile with Sun Studio 12

Original issue 17 created by [email protected] on 2011-07-06T14:01:14.000Z:

What steps will reproduce the problem?

  1. make CC=CC (on Solaris)

What is the expected output? What do you see instead?

Working binaries and tests are expected. Failure to compile happens instead.

What version of the product are you using? On what operating system?

trunk r36

Please provide any additional information below.

helpers/memenv/memenv_test.cc: Use correct datatype

Original issue 41 created by [email protected] on 2011-09-26T09:08:07.000Z:

While running make check you'll most likely run into a build error, since file_size is passed where a uint64_t is expected but is declared as size_t. Here's the diff to make it build again:

diff --git a/helpers/memenv/memenv_test.cc b/helpers/memenv/memenv_test.cc
index 30b0bb0..3791dc3 100644
--- a/helpers/memenv/memenv_test.cc
+++ b/helpers/memenv/memenv_test.cc
@@ -26,7 +26,7 @@ class MemEnvTest {
 };

 TEST(MemEnvTest, Basics) {
-  size_t file_size;
+  uint64_t file_size;
   WritableFile* writable_file;
   std::vector<std::string> children;

unknown size of return string of leveldb_property_value()

Original issue 33 created by theo.bertozzi on 2011-08-20T15:08:57.000Z:

  1. call leveldb_property_value() with "leveldb.stats"
  2. strlen(leveldb_property_value("leveldb.stats"))

strlen doesn't find any '\0' terminator.
CopyString() is implemented as malloc(size) followed by memcpy().

Add a size_t* size parameter to leveldb_property_value(), as in leveldb_get(), or return a real null-terminated string.

Bug on Iterator.Prev()

Original issue 29 created by david.yu.ftw on 2011-08-12T06:15:16.000Z:

The testcase is attached.

Here's the output:
***** Running db_test
==== Test DBTest.Empty
==== Test DBTest.ReadWrite
==== Test DBTest.PutDeleteGet
==== Test DBTest.GetFromImmutableLayer
==== Test DBTest.GetFromVersions
==== Test DBTest.GetSnapshot
==== Test DBTest.GetLevel0Ordering
==== Test DBTest.GetOrderedByLevels
==== Test DBTest.GetPicksCorrectFile
==== Test DBTest.IterEmpty
==== Test DBTest.IterSingle
==== Test DBTest.IterMulti
==== Test DBTest.IterMultiWithDelete
db/db_test.cc:497: failed: b->va == a->va
make: *** [check] Error 1

add support for closing a database without having to delete the c++ object

Original issue 48 created by josephwnorton on 2011-10-28T15:23:18.000Z:

As I understand it, leveldb implements its own locking mechanism for thread safety. This works fine except when wanting to close the database by deleting the C++ object.

For (selfish) performance reasons, I prefer not to provide my own mutex to protect a leveldb instance embedded within an application.

Can you consider adding a new API call that synchronously closes all of the database's file and memory resources? This would allow deletion of the database's C++ object to be done asynchronously and would let an application choose whether or not to protect leveldb with its own mutex mechanism.

db_bench crashing on large number of entries

Original issue 4 created by teoryn on 2011-05-10T23:17:44.000Z:

I ran a modified db_bench with --num=10 to --num=10^9 (by multiples of 10) to test the scaling of leveldb.
My modifications statically link snappy into the posix version of leveldb; I've attached the diff.
At --num=10^8 and --num=10^9 problems start occurring; the attached stat.txt shows the output of all the runs.

10^8 was killed during the overwrite benchmark at 23300000 ops, but I was not able to reproduce the error using 'db_bench --num=100000000 --benchmarks=overwrite'.

10^9 generated the following error during fillrandom:
put error: IO error: /tmp/dbbench/006484.log: Too many open files
Running 'db_bench --num=1000000000 --benchmarks=fillrandom' reproduced the error.

Built (patched) r27 of leveldb and r35 of snappy with:
$ gcc --version
gcc (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5

iOS build fails with XCode 4

Original issue 7 created by ashoemaker on 2011-06-06T20:44:36.000Z:

"make PLATFORM=IOS" fails with XCode 4 installed:
g++-4.2: error trying to exec '/usr/bin/arm-apple-darwin10-g++-4.2.1': execvp: No such file or directory

This patch updates the Makefile to use the compilers provided by the installed iOS SDK.

coding.h belongs in the include folder

Original issue 13 created by DonovanHide on 2011-06-25T01:01:36.000Z:

The functions available in the coding.h file are generally useful for setting keys and values and should be exposed to the user of the library. I'm guessing you might say the dependency on port.h prohibits this, but the big-endian/little-endian condition could be broken out.

Additionally, is it intended that a "make install" target will eventually be added?

corruption_test fails

Original issue 9 created by winwasher on 2011-06-19T09:56:51.000Z:

Hi. I have some problems installing leveldb.

I run make check and the corruption_test test fails. Running it under gdb I get:

(gdb) run
Starting program: /home/spyros/src/leveldb/corruption_test
[Thread debugging using libthread_db enabled]
==== Test CorruptionTest.Recovery

Program received signal SIGSEGV, Segmentation fault.
0x0805d4b2 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual(char const* const&, leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Node**) const ()
(gdb) bt

#0  0x0805d4b2 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual(char const* const&, leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Node**) const ()
#1  0x0805d5bc in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Insert(char const* const&) ()
#2  0x0805cf2a in leveldb::MemTable::Add(unsigned long long, leveldb::ValueType, leveldb::Slice const&, leveldb::Slice const&) ()
#3  0x08069779 in leveldb::(anonymous namespace)::MemTableInserter::Put(leveldb::Slice const&, leveldb::Slice const&) ()
#4  0x08069a9d in leveldb::WriteBatch::Iterate(leveldb::WriteBatch::Handler*) const ()
#5  0x08069b40 in leveldb::WriteBatchInternal::InsertInto(leveldb::WriteBatch const*, leveldb::MemTable*) ()
#6  0x0805685e in leveldb::DBImpl::Write(leveldb::WriteOptions const&, leveldb::WriteBatch*) ()
#7  0x0804f090 in leveldb::CorruptionTest::Build(int) ()
#8  0x0804cd6a in leveldb::_Test_Recovery::_Run() ()
#9  0x0805053a in leveldb::_Test_Recovery::_RunIt() ()
#10 0x08074b4e in leveldb::test::RunAllTests() ()
#11 0x0804a84b in main ()

I get an analogous error if I run this minimal snippet:

#include <iostream>
#include <string>

#include "leveldb/db.h"

int main(int argc, char* argv[]) {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/home/spyros/src/async/leveldb/keystore", &db);
  leveldb::WriteOptions wopts;
  wopts.sync = true;
  if (s.ok()) s = db->Put(wopts, "Hello", "World");
  std::string val;
  if (s.ok()) s = db->Get(leveldb::ReadOptions(), "Hello", &val);
  if (s.ok())
    std::cerr << "Value of 'Hello' is " << val << std::endl;
  else
    std::cerr << "DB error: " << s.ToString() << std::endl;
  delete db;
}

the gdb output in this case is
(gdb) run
Starting program: /home/spyros/src/async/minimal
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
0x08056cd2 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual(char const* const&, leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Node**) const ()
(gdb) bt

#0  0x08056cd2 in leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::FindGreaterOrEqual(char const* const&, leveldb::SkipList<char const*, leveldb::MemTable::KeyComparator>::Node**) const ()
#1  0x08056d9e in leveldb::MemTableIterator::Seek(leveldb::Slice const&) ()
#2  0x08053da1 in leveldb::(anonymous namespace)::DBIter::Seek(leveldb::Slice const&) ()
#3  0x0804df82 in leveldb::DBImpl::Get(leveldb::ReadOptions const&, leveldb::Slice const&, std::string*) ()
#4  0x0804a4d3 in main ()

I use Ubuntu 10.04 LTS 32-bit, gcc 4.4.3, gdb 7.1, and leveldb revision 31.

Any suggestions for this setup?

Thank you
Spyros

Leveldb's compaction will not start when repeatedly putting the same key-value data

Original issue 19 created by [email protected] on 2011-07-13T10:50:26.000Z:

What steps will reproduce the problem?

  1. I wrote a "test.cc" file that repeatedly puts the same key:

#include <cstdlib>
#include <iostream>
#include <string>

#include "leveldb/db.h"
#include "leveldb/comparator.h"
#include "leveldb/write_batch.h"
#include "leveldb/cache.h"

int main(int argc, char* argv[])
{
  leveldb::DB* db = NULL;
  leveldb::Options options;
  /////////////////////////////
  options.create_if_missing = true;
  options.compression = leveldb::kSnappyCompression;
  options.block_cache = leveldb::NewLRUCache(512 * 1024 * 1024);
  /////////////////////////////
  leveldb::Status status = leveldb::DB::Open(options, "./testdb", &db);
  if (!status.ok())
    std::cout << "open db failed!" << std::endl;
  //assert(status.ok());
  std::string value1;
  leveldb::Status s;
  int i = atoi(argv[1]);
  while (--i)
    s = db->Put(leveldb::WriteOptions(), "rewinx", "aaaaaaaaaaaaaaaaaaaaaaaaaaaaa");
}

  2. compile:
    g++ -o test test.cc -I./leveldb/include -L./leveldb -lleveldb -lpthread -O3
  3. run (put 10000000 times with the same key):
    ./test 10000000

What is the expected output? What do you see instead?
I expect compaction to start, but it does not. If I run "./test 10000000" several times, leveldb creates more and more new .sst files, until the disk is full.

What version of the product are you using? On what operating system?
Version: svn checkout on July 13, 2011
Operating System: CentOS 5.4 64bit

warning in cache.cc

Original issue 20 created by [email protected] on 2011-07-13T23:46:56.000Z:

In the Chrome build:

third_party/leveldb/util/cache.cc(115) : warning C4018: '<' : signed/unsigned mismatch

Code looks like:

memset(new_list, 0, sizeof(new_list[0]) * new_length);
uint32_t count = 0;
for (int i = 0; i < length_; i++) {   // L115
  LRUHandle* h = list_[i];
  while (h != NULL) {

length_ is a uint32_t. Probably just need to change i to uint32_t.

Add OpenBSD support

Original issue 31 created by jasper.lievisse.adriaanse on 2011-08-18T15:01:29.000Z:

Attached is a patch to allow leveldb to build (and pass regress tests) on OpenBSD. Tested successfully on OpenBSD/amd64 5.0.

Can't compile on OSX 10.5.8 + GCC 4.0.1

Original issue 10 created by voidptrptr on 2011-06-21T14:34:59.000Z:

What steps will reproduce the problem?

Can't compile on OSX

$ make
g++ -c -I. -I./include -DLEVELDB_PLATFORM_OSX -O2 -DNDEBUG db/table_cache.cc -o db/table_cache.o
g++ -c -I. -I./include -DLEVELDB_PLATFORM_OSX -O2 -DNDEBUG db/version_edit.cc -o db/version_edit.o
g++ -c -I. -I./include -DLEVELDB_PLATFORM_OSX -O2 -DNDEBUG db/version_set.cc -o db/version_set.o
db/version_set.cc: In member function ‘void leveldb::VersionSet::Builder::Apply(leveldb::VersionEdit*)’:
./db/version_edit.h:99: error: ‘std::vector<std::pair<int, leveldb::InternalKey>, std::allocator<std::pair<int, leveldb::InternalKey> > > leveldb::VersionEdit::compact_pointers_’ is private
db/version_set.cc:287: error: within this context
./db/version_edit.h:99: error: ‘std::vector<std::pair<int, leveldb::InternalKey>, std::allocator<std::pair<int, leveldb::InternalKey> > > leveldb::VersionEdit::compact_pointers_’ is private
db/version_set.cc:288: error: within this context
./db/version_edit.h:99: error: ‘std::vector<std::pair<int, leveldb::InternalKey>, std::allocator<std::pair<int, leveldb::InternalKey> > > leveldb::VersionEdit::compact_pointers_’ is private
db/version_set.cc:290: error: within this context
./db/version_edit.h:86: error: ‘typedef class std::set<std::pair<int, uint64_t>, std::less<std::pair<int, uint64_t> >, std::allocator<std::pair<int, uint64_t> > > leveldb::VersionEdit::DeletedFileSet’ is private
db/version_set.cc:294: error: within this context
./db/version_edit.h:100: error: ‘std::set<std::pair<int, uint64_t>, std::less<std::pair<int, uint64_t> >, std::allocator<std::pair<int, uint64_t> > > leveldb::VersionEdit::deleted_files_’ is private
db/version_set.cc:294: error: within this context
./db/version_edit.h:101: error: ‘std::vector<std::pair<int, leveldb::FileMetaData>, std::allocator<std::pair<int, leveldb::FileMetaData> > > leveldb::VersionEdit::new_files_’ is private
db/version_set.cc:304: error: within this context
./db/version_edit.h:101: error: ‘std::vector<std::pair<int, leveldb::FileMetaData>, std::allocator<std::pair<int, leveldb::FileMetaData> > > leveldb::VersionEdit::new_files_’ is private
db/version_set.cc:305: error: within this context
./db/version_edit.h:101: error: ‘std::vector<std::pair<int, leveldb::FileMetaData>, std::allocator<std::pair<int, leveldb::FileMetaData> > > leveldb::VersionEdit::new_files_’ is private
db/version_set.cc:306: error: within this context
db/version_set.cc: In member function ‘void leveldb::VersionSet::Builder::SaveTo(leveldb::Version*)’:
./db/version_set.h:66: error: ‘std::vector<leveldb::FileMetaData*, std::allocator<leveldb::FileMetaData*> > leveldb::Version::files_ [7]’ is private
db/version_set.cc:320: error: within this context
./db/version_set.h:66: error: ‘std::vector<leveldb::FileMetaData*, std::allocator<leveldb::FileMetaData*> > leveldb::Version::files_ [7]’ is private
db/version_set.cc:324: error: within this context
db/version_set.cc: In member function ‘void leveldb::VersionSet::Builder::MaybeAddFile(leveldb::Version*, int, leveldb::FileMetaData*)’:
./db/version_set.h:66: error: ‘std::vector<leveldb::FileMetaData*, std::allocator<leveldb::FileMetaData*> > leveldb::Version::files_ [7]’ is private
db/version_set.cc:367: error: within this context
make: *** [db/version_set.o] Error 1

What version of the product are you using? On what operating system?

from SVN

Please provide any additional information below.

gcc --version
i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5488)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

invoke pure virtual function error

Original issue 40 created by RealTanbro on 2011-09-13T10:23:12.000Z:

With cpy-leveldb, this error occurs at exit.

cpy-leveldb does the following when closing:

leveldb_close(self->_db);
leveldb_options_destroy(self->_options);
leveldb_cache_destroy(self->_cache);
leveldb_env_destroy(self->_env);

leveldb_readoptions_destroy(self->_roptions);

The dump is below:

#0  0x00299410 in __kernel_vsyscall ()
#1  0x0099ddf0 in raise () from /lib/libc.so.6
#2  0x0099f701 in abort () from /lib/libc.so.6
#3  0x00449b10 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libstdc++.so.6
#4  0x00447515 in ?? () from /usr/lib/libstdc++.so.6
#5  0x00447552 in std::terminate() () from /usr/lib/libstdc++.so.6
#6  0x00447c75 in __cxa_pure_virtual () from /usr/lib/libstdc++.so.6
#7  0x0062e91f in leveldb::InternalKeyComparator::Compare(leveldb::Slice const&, leveldb::Slice const&) const () from /usr/local/lib/python2.7/site-packages/leveldb.so
#8  0x0063f779 in leveldb::(anonymous namespace)::MergingIterator::FindSmallest() () from /usr/local/lib/python2.7/site-packages/leveldb.so
#9  0x0064005c in leveldb::(anonymous namespace)::MergingIterator::Next() () from /usr/local/lib/python2.7/site-packages/leveldb.so
#10 0x0062898e in leveldb::DBImpl::DoCompactionWork(leveldb::DBImpl::CompactionState*) () from /usr/local/lib/python2.7/site-packages/leveldb.so
#11 0x00629241 in leveldb::DBImpl::BackgroundCompaction() () from /usr/local/lib/python2.7/site-packages/leveldb.so
#12 0x006297e8 in leveldb::DBImpl::BackgroundCall() () from /usr/local/lib/python2.7/site-packages/leveldb.so
#13 0x00645f92 in leveldb::(anonymous namespace)::PosixEnv::BGThreadWrapper(void*) () from /usr/local/lib/python2.7/site-packages/leveldb.so
#14 0x00af1832 in start_thread () from /lib/libpthread.so.0
#15 0x00a46e0e in clone () from /lib/libc.so.6

Allow ${CC} and ${OPT} to be overridden

Original issue 32 created by jasper.lievisse.adriaanse on 2011-08-18T15:13:07.000Z:

Attached is a patch to allow ${CC} and ${OPT} to be overridden. This helps packagers who may want to point CC to a different compiler than g++, or who want to append to OPT.

Provide a shared library

Original issue 27 created by quadrispro on 2011-08-09T12:57:55.000Z:

Please add a target into the Makefile to compile a shared library object.

Thanks in advance for any reply.

Assertion error in MaybeAddFile

Original issue 34 created by dsallings on 2011-08-22T20:07:15.000Z:

What steps will reproduce the problem?

I was running a test with a predefined set of 1,000,000 keys (of the form {16 bits of 0}k%d) and small fixed-size values in a large-ish application. I haven't isolated a test case just yet.

What is the expected output? What do you see instead?

I expected it to work; instead I got an assertion:

Assertion failed: (vset_->icmp_.Compare((*files)[files->size()-1]->largest, f->smallest) < 0), function MaybeAddFile, file embedded/leveldb/db/version_set.cc, line 559.

With debug info:

overlapping ranges in same level \x00\x00k269287\x01^\x7fK \x00\x00\x00\x00 vs. \x00\x00k156819\x01\xf5\x1dX\x00\x00\x00\x00

What version of the product are you using? On what operating system?

This is on OS X, with trunk fetched just now.

Failure to gyp leveldb.gyp on mac

Original issue 8 created by kkowalczyk on 2011-06-07T04:17:41.000Z:

I tried to generate XCode project file with gyp (latest svn sources r932) but get this error:

kjkmacpro:leveldb kkowalczyk$ gyp
Traceback (most recent call last):
  File "/usr/local/bin/gyp", line 18, in <module>
    sys.exit(gyp.main(sys.argv[1:]))
  File "/Library/Python/2.6/site-packages/gyp/__init__.py", line 448, in main
    options.circular_check)
  File "/Library/Python/2.6/site-packages/gyp/__init__.py", line 87, in Load
    depth, generator_input_info, check, circular_check)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 2224, in Load
    depth, check)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 379, in LoadTargetBuildFile
    build_file_path)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 998, in ProcessVariablesAndConditionsInDict
    build_file)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 1013, in ProcessVariablesAndConditionsInList
    ProcessVariablesAndConditionsInDict(item, is_late, variables, build_file)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 927, in ProcessVariablesAndConditionsInDict
    expanded = ExpandVariables(value, is_late, variables, build_file)
  File "/Library/Python/2.6/site-packages/gyp/input.py", line 697, in ExpandVariables
    ' in ' + build_file
KeyError: 'Undefined variable library in leveldb.gyp while trying to load leveldb.gyp'

I know next to nothing about gyp, but it seems like a problem with leveldb.gyp.

I'm on Mac 10.6.7, leveldb is r30, python 2.6.1

The documentation's example for Comparator is buggy

Original issue 28 created by amirhkiani on 2011-08-10T07:37:42.000Z:

Hey!

I tried following the documentation for creating comparators and the line:

virtual const char* Name() { return "TwoPartComparator"; }

is wrong. It has to be:

virtual const char* Name() const { return "TwoPartComparator"; }

instead (with a const after Name()).

Assuming the documentation was correct, I spent a long time trying to figure out what was wrong, and figured out at the end that the sample code was wrong to begin with. It's minor, but I guess it would help to fix it.

Many thanks for open sourcing leveldb!
Amir

Better to disable copying of SequentialFile and RandomAccessFile

Original issue 21 created by giantchen on 2011-07-17T15:11:01.000Z:

Not a bug, but a coding style improvement: it prevents anyone from accidentally passing a SequentialFile or RandomAccessFile by value to a function.

What version of the product are you using? On what operating system?

Revision 37. on Ubuntu 10.04 x86-64 with g++ 4.4.3

IO error: db//081444.sst: Too many open files

Original issue 45 created by GaryPYang on 2011-10-09T09:12:25.000Z:

Hi, all:

I use:

leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
for (it->SeekToFirst(); it->Valid(); it->Next()) {
...
}

to iterate over a leveldb database, but I get the following error:
"IO error: db//081444.sst: Too many open files"

If I process a smaller leveldb, everything is OK. I think the problem happens because there are too many .sst files to process, but I can't find any way to solve this problem in leveldb by searching Google.

Can anyone help me resolve this problem? Thanks a lot!

update include path listed in doc/index.html

Original issue 25 created by jehiah on 2011-07-30T04:37:44.000Z:

The code examples list "leveldb/include/file.h", but the source tree uses include/leveldb/file.h; presumably this is an artifact from something else, and just "leveldb/file.h" is how people would typically include the header files (this may change if a make install target is added to the Makefile).

Snappy compression for OS X / iOS

Original issue 12 created by ashoemaker on 2011-06-22T09:11:44.000Z:

The attached patch mirrors the implementation of Snappy_Compress and Snappy_Uncompress from port/port_posix.h to port/port_osx.h, making compression available in OS X and iOS builds.

cstdatomic problem with Debian

Original issue 3 created by [email protected] on 2011-05-10T14:03:43.000Z:

SVN r27.

I am on Debian unstable (as of today) and its default GCC version is 4.6.0. Doing "make" results in:

./port/port_posix.h:14:22: fatal error: cstdatomic: No such file or directory

It seems that cstdatomic was renamed to atomic in newer GCC versions. Replacing the cstdatomic include with an atomic include in the header worked for me.

Seek() semantics asymmetry

Original issue 47 created by Paolo.Losi on 2011-10-19T09:39:36.000Z:

As it is possible to:

Seek to the first key in the source that is at or past target

it should be possible to:

Seek to the first key in the source that is at or before target

This would enable easier backward iterations.

I let you judge on the two options that I see:

  1. adding a "back" option to Seek, or
  2. providing a SeekBack method.

Thanks!

add support to detect and to prevent multiple callers of the same UNIX process from simultaneously opening the same database

Original issue 49 created by josephwnorton on 2011-10-29T12:11:07.000Z:

Add support to detect and to prevent multiple callers of the same UNIX process from simultaneously opening the same database.

Excerpt from previous exchange on leveldb mailing list:


Sanjay Ghemawat (Sep 30, 1:04 am):
On Thu, Sep 29, 2011 at 8:30 AM, Joseph Wayne Norton wrote:
> Hans -
> Thanks. Is it correct to assume that it is the caller's responsibility to ensure this does not happen?

leveldb guarantees that it will catch when two distinct processes try to open the db concurrently. However it doesn't guarantee what happens if the same process tries to do so, and therefore it is the caller's responsibility to check for concurrent opens from the same process. This is ugly, but the unix file locking primitives are very annoying in this regard. I'll think about whether or not we should clean up the spec by doing extra checks inside the leveldb implementation.

leveldb exports symbols

Original issue 1 created by [email protected] on 2011-04-20T00:00:16.000Z:

leveldb appears to export symbols. In Chrome, we don't want to export these symbols. In fact, we have a script that breaks the build whenever we export an unexpected symbol. Here's a patch that fixes the issue:

Index: util/env_chromium.cc
===================================================================
--- util/env_chromium.cc    (revision 21)
+++ util/env_chromium.cc    (working copy)
@@ -30,8 +30,9 @@
 #endif

 #if defined(OS_MACOSX) || defined(OS_WIN)
+namespace {
+
 // The following are glibc-specific
-extern "C" {
 size_t fread_unlocked(void *ptr, size_t size, size_t n, FILE *file) {
   return fread(ptr, size, n, file);
 }
@@ -51,6 +52,7 @@
   return fsync(fildes);
 #endif
 }
+
 }
 #endif

add support for PutNew/3 and WriteNew/2 update operations

Original issue 42 created by josephwnorton on 2011-09-27T14:32:32.000Z:

It would be handy to have 2 new update operations that fail if the key(s) already exist in the database.

For WriteNew operations, when duplicate keys are given in the batch, the last key/value in the batch is stored and is not treated as already existing.

// Set the database entry for "key" to "value" only if "key" does not exist. Returns OK on success,
// and a non-OK status on error.
// Note: consider setting options.sync = true.
virtual Status PutNew(const WriteOptions& options,
const Slice& key,
const Slice& value) = 0;

// Apply the specified updates to the database only if all keys do not exist (prior to this operation).
// Returns OK on success, non-OK on failure.
// Note: consider setting options.sync = true.
virtual Status WriteNew(const WriteOptions& options, WriteBatch* updates) = 0;

Fix build for OS X.

Original issue 2 created by paul.joseph.davis on 2011-05-08T22:17:55.000Z:

LevelDB fails to build on OS X with the standard GCC from XCode, and so on and so forth.

This patch mixes the POSIX and Chromium port implementations and pulls in a couple of Chromium headers, so that building on OS X is possible without requiring users to build GCC 4.5.

Patch commit is at [1] and attached as a diff.

[1] https://github.com/davisp/leveldb/commit/50e280c9b1cfde0e255d124f38e1aa436d36ba52

add support for DeleteAllKeys update operation

Original issue 43 created by josephwnorton on 2011-10-01T05:55:50.000Z:

It would be handy to have a new update operation that can delete all keys atomically without having to close, delete, and then re-open the database.

Fix build on OS X and Linux

Original issue 15 created by ashoemaker on 2011-06-29T10:48:39.000Z:

The Makefile refers to PLATFORM_CCFLAGS instead of PLATFORM_CFLAGS. As a result, -DOS_{MACOSX,LINUX} are not defined and the build fails. The output of make on OS X (XCode 4):

g++ -c -I. -I./include -fno-builtin-memcmp -DLEVELDB_PLATFORM_POSIX -O2 -DNDEBUG -DSNAPPY db/version_set.cc -o db/version_set.o
In file included from ./port/port.h:14,
from ./util/coding.h:17,
from ./db/dbformat.h:13,
from ./db/version_set.h:21,
from db/version_set.cc:5:
./port/port_posix.h:20:22: error: endian.h: No such file or directory
make: *** [db/version_set.o] Error 1

On Linux (at least Ubuntu 11.04), the concatenation operator on shell variables is only supported in Bash ≥3.1 (http://tldp.org/LDP/abs/html/bashver3.html). This causes the following error in addition to the above:

./build_detect_platform: 55: PORT_CFLAGS+= -DLEVELDB_PLATFORM_POSIX: not found

The attached patch corrects the CFLAGS definition to use PLATFORM_CFLAGS, and avoids the use of the += operator (an alternative would be to run the script with bash instead of sh).

The patch contains two minor changes for the OS X build:

  • Removes port/port_osx.{h,cc}, which have been fully merged into port/port_posix.{h,cc}.
  • Removes -pthread and -lpthread, which are implicit in all OS X builds.

leveldb::DestroyDB returns failure status on Windows

Original issue 18 created by [email protected] on 2011-07-11T07:57:31.000Z:

What steps will reproduce the problem?

  1. Create a leveldb file on Windows.
  2. Call leveldb::DestroyDB for the database on Windows.

What is the expected output? What do you see instead?
The status code is expected to be OK.

What version of the product are you using? On what operating system?
Revision 36, coupled in Chromium 91993. Windows.

Please provide any additional information below.
On Windows, the line
> Status del = env->DeleteFile(dbname + "/" + filenames[i]);
looks failing to delete a file "LOCK". The LOCK file has been already opened at
> Status result = env->LockFile(LockFileName(dbname), &lock);

I guess Linux and Mac allow "deleting" an opened file (the filename is simply unlinked), but Windows doesn't allow it.

The deletion actually works correctly since the LOCK file is finally deleted at
> env->DeleteFile(LockFileName(dbname));
But the return code indicates failure.

Latest build fails on db/log_test.cc on OS X

Original issue 6 created by paul.joseph.davis on 2011-05-23T23:03:17.000Z:

Assuming the patch from Issue #2 is applied, trunk fails to build because of a type mismatch in db/log_test.cc. I've attached a trivial diff that fixes the issue.
