
chronicle-core's People

Contributors

alamar, danielmitterdorfer, danielshaya, danielshrem, dpisklov, epickrram, glukos, hft-team-city, hpple, j4sm1ne96, jansturenielsen, jatindersangha, jerryshea, lburgazzoli, leventov, martyix, maxim-ponomarev, michaelszymczak, minborg, nicktindall, nickward, peter-lawrey, pr0methean, robaustin, scottkidder, sergebg, sheinbergon, tgd, tomshercliff, yevgenp


chronicle-core's Issues

retryReadVolatileInt() spamming stdout

The System.out.println() call in UnsafeMemory.retryReadVolatileInt() produces an excessive amount of output on the console. This behaviour occurs when multiple appenders within the same process simultaneously write to the same Chronicle Queue file. (Each appender has its own instance of SingleChronicleQueue.)
Please could you remove or replace this System.out.println() call, and clarify how to avoid triggering this condition in the first place.
Please could you also prepare a Chronicle Queue release that uses the updated version of Chronicle Core containing this fix.

Exception accessing non-existent field on JDK12

Changes introduced in JDK12 have broken net.openhft.chronicle.core.Jvm.
Tested on JDK12 EA build 12+33 on macOS.

See relevant stack trace snippet below:

java.lang.AssertionError: java.lang.NoSuchFieldException: override
	at net.openhft.chronicle.core.Jvm.setAccessible(Jvm.java:338)
	at net.openhft.chronicle.core.Jvm.getMethod0(Jvm.java:314)
	at net.openhft.chronicle.core.Jvm.getMethod(Jvm.java:308)
	at net.openhft.chronicle.core.OS.<clinit>(OS.java:78)
	at net.openhft.chronicle.map.ChronicleMapBuilder.<init>(ChronicleMapBuilder.java:206)
	at net.openhft.chronicle.map.ChronicleMapBuilder.of(ChronicleMapBuilder.java:280)
	at net.openhft.chronicle.map.ChronicleMap.of(ChronicleMap.java:71)
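For context, JDK 12 filters the AccessibleObject.override field out of the set of reflectively visible fields, which is why the lookup in Jvm.setAccessible fails. A minimal probe (illustrative only, not Chronicle's code) showing the behaviour change between JDK versions:

```java
import java.lang.reflect.AccessibleObject;
import java.lang.reflect.Field;

public class OverrideFieldCheck {
    public static void main(String[] args) {
        try {
            // Pre-JDK12: this lookup succeeds and the field can be poked to
            // bypass access checks. JDK12+ filters it from reflection.
            Field override = AccessibleObject.class.getDeclaredField("override");
            System.out.println("override field visible (pre-JDK12 behaviour): " + override);
        } catch (NoSuchFieldException e) {
            System.out.println("override field filtered (JDK12+ behaviour)");
        }
    }
}
```

A fix presumably needs a fallback path (e.g. plain setAccessible inside a try/catch) rather than assuming the field exists.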

ObjectUtils should lookup readResolve() with getMethod()

This allows readResolve to be found as a default method, and allows this helper interface to be used with Chronicle Wire:

public interface MarshallableEnum<T extends Enum<T>> extends Marshallable
{
    default Object readResolve()
    {
        return Enum.valueOf((Class<T>) getClass(), name());
    }

    String name();
}

As things stand, the default readResolve() is not used, and Marshallable enums need to provide their own readResolve().
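A small sketch (with hypothetical types, not Chronicle's code) of why the lookup method matters here: getMethod() finds public methods inherited from interfaces, including default methods, while getDeclaredMethod() on the implementing class does not:

```java
import java.lang.reflect.Method;

public class ReadResolveLookup {
    interface Helper {
        // stands in for the MarshallableEnum default method above
        default Object readResolve() { return this; }
    }

    static class Impl implements Helper { }

    public static void main(String[] args) throws Exception {
        // getMethod() sees public members, including inherited defaults...
        Method viaGetMethod = Impl.class.getMethod("readResolve");
        System.out.println("getMethod found: " + viaGetMethod.getName());
        // ...while getDeclaredMethod() only sees what Impl itself declares.
        try {
            Impl.class.getDeclaredMethod("readResolve");
            System.out.println("getDeclaredMethod found it");
        } catch (NoSuchMethodException e) {
            System.out.println("getDeclaredMethod missed it");
        }
    }
}
```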

Java 10 - net.openhft.chronicle.core.Jvm: Unable to determine max direct memory

In Java 10, the code

return (Long) method.invoke(null);

throws an exception (which is silently swallowed):

java.lang.IllegalAccessException: class ##### cannot access class jdk.internal.misc.VM (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @###
	at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:360)
	at java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:589)
	at java.base/java.lang.reflect.Method.invoke(Method.java:556)

Console error message: net.openhft.chronicle.core.Jvm: Unable to determine max direct memory
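One possible shape of a fix (a sketch, not necessarily what Chronicle does): attempt the internal call reflectively, and on any failure fall back to Runtime.maxMemory(), which roughly matches the JVM default for -XX:MaxDirectMemorySize when it is not set explicitly:

```java
public class MaxDirectMemory {
    // Hedged fallback: jdk.internal.misc.VM.maxDirectMemory() is not exported
    // to unnamed modules, so the reflective call fails under Java 9+ unless
    // --add-exports java.base/jdk.internal.misc=ALL-UNNAMED is passed.
    static long maxDirectMemoryOrFallback() {
        try {
            Class<?> vm = Class.forName("jdk.internal.misc.VM");
            return (Long) vm.getMethod("maxDirectMemory").invoke(null);
        } catch (Throwable t) {
            // approximate with the heap limit rather than report "unable to determine"
            return Runtime.getRuntime().maxMemory();
        }
    }

    public static void main(String[] args) {
        System.out.println(maxDirectMemoryOrFallback() > 0);
    }
}
```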

JLBH - Support throughput < 1 operation / s

I'd like to use JLBH for benchmarking bulk operations which may take more than 1 second to complete. In my case they take around 5 seconds, so I want to specify a target throughput of less than 1/5, i.e. 0.2 operations/second. However, the signature JLBHOptions#throughput(int) effectively prohibits target throughput rates of less than 1 operation/s.

Are you open to supporting smaller target throughput rates? I could imagine an overloaded version with the signature JLBHOptions#throughput(int, TimeUnit) to specify the throughput per time unit, with TimeUnit.SECONDS as the default.
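A sketch of how the proposed overload could work (method name hypothetical): expressing the rate per arbitrary TimeUnit keeps sub-1 op/s rates integral, and internally it reduces to the pause between consecutive operations:

```java
import java.util.concurrent.TimeUnit;

public class ThroughputOverload {
    // Hypothetical core of JLBHOptions#throughput(int, TimeUnit): the rate
    // "opsPerUnit operations per one unit" becomes the nanosecond pause
    // between consecutive operations.
    static long pauseNanos(int opsPerUnit, TimeUnit unit) {
        return unit.toNanos(1) / opsPerUnit;
    }

    public static void main(String[] args) {
        // 0.2 ops/s expressed without fractions: 12 operations per minute,
        // i.e. one operation every 5 seconds
        System.out.println(pauseNanos(12, TimeUnit.MINUTES));
        // the existing per-second behaviour is the TimeUnit.SECONDS case
        System.out.println(pauseNanos(1_000_000, TimeUnit.SECONDS));
    }
}
```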

JVM crashes with SIGBUS + BUS_ADRERR

@peter-lawrey,

I ran into this JVM crash when running multiple JVMs (4 instances) on the same server:

Current thread (0x00007f6a60015800):  JavaThread "CommitLogFlushThread-0" daemon [_thread_in_Java, id=12927, stack(0x00007f6a1900e000,0x00007f6a1910f000)]

siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007f6a1e3aa000

(full log at: https://gist.github.com/telles-simbiose/58b91617eaadc82ae9c50279daab142c)

Right after the first crash, the other 3 instances also crashed, but while doing different things on other threads:

Current thread (0x00007f8134e46800):  JavaThread "BinaryDiffConsumerTask-0" [_thread_in_Java, id=12647, stack(0x00007f80dd6fe000,0x00007f80dd7ff000)]

siginfo: si_signo: 7 (SIGBUS), si_code: 2 (BUS_ADRERR), si_addr: 0x00007f80c93e3000

(full log at: https://gist.github.com/telles-simbiose/c6a96d01f2aa8b79924006acd8dc31a8, https://gist.github.com/telles-simbiose/9ba99522c7f1a127e065e40b62018393 and https://gist.github.com/telles-simbiose/f854c65996fa4bcf63535173dfd2a400)

I found that BUS_ADRERR means an invalid physical address was accessed, yet the only addresses I use are the ones returned from Memory.allocate and OS.map.

Is there any way for these methods to return an invalid address? Or for these addresses to become invalid after some operation?

Thanks so much.

ClassLookup deprecation

The ClassLookup class has recently been marked as @Deprecated; however, there is no explanation or documentation of what to use as an alternative. One might think to use ClassAliasPool directly, but it has no public constructor, so it is unclear how to migrate away from the deprecated class.

Spurious bad file descriptor when running load tests on c2 with -DfastJava8IO=true

TODO: more investigation, this is a placeholder for now.

java.io.IOException: Bad file descriptor
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at net.openhft.chronicle.core.OS.read0(OS.java:489)
        at net.openhft.chronicle.core.tcp.FastJ8SocketChannel.read0(FastJ8SocketChannel.java:84)
        at net.openhft.chronicle.core.tcp.FastJ8SocketChannel.read(FastJ8SocketChannel.java:30)
        at net.openhft.chronicle.network.TcpEventHandler.action(TcpEventHandler.java:198)
        at net.openhft.chronicle.threads.VanillaEventLoop.runAllMediumHandler(VanillaEventLoop.java:334)
        at net.openhft.chronicle.threads.VanillaEventLoop.runMediumLoopOnly(VanillaEventLoop.java:266)
        at net.openhft.chronicle.threads.VanillaEventLoop.runLoop(VanillaEventLoop.java:240)
        at net.openhft.chronicle.threads.VanillaEventLoop.run(VanillaEventLoop.java:221)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Add support for explicit and optional safepoints

Add

Jvm.safepoint();
// and
Jvm.optionalSafepoint();

to support adding safepoints explicitly.

Add a test to ensure that a safepoint is performed while remaining very lightweight, e.g. ~25 ns.
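A minimal sketch of the idea, assuming an explicit safepoint can be encouraged by a trivially short synchronized block on a shared lock (the JIT emits a poll at such boundaries); this is illustrative, not Chronicle's implementation:

```java
public class SafepointSketch {
    private static final Object LOCK = new Object();

    // Assumed technique: an empty synchronized block on a static (escaping)
    // object cannot be elided, giving the JIT a natural poll point while
    // costing only a handful of nanoseconds per call.
    static void safepoint() {
        synchronized (LOCK) { }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 1_000_000; i++)
            safepoint();
        long avgNs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("avg ns per call: " + avgNs);
        System.out.println(avgNs >= 0);
    }
}
```

A unit test along the lines requested could assert the average stays under some generous bound (say, a few hundred ns) to catch regressions.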

Creating a ChronicleMap or a ChronicleMapBuilder takes over 10 seconds

Creating a ChronicleMap or a ChronicleMapBuilder for the first time takes over 10 seconds. Here's the code:

ChronicleMapBuilder<String, String> builder =
        ChronicleMapBuilder.of(String.class, String.class)
        .averageKey("999234567")
        .averageValue("999234567")
        .entries(10);

ChronicleMap<String, String> coll = builder.create();

The 10 second delay happens on the first statement. I ran the code in the profiler. Here's what I found:

net.openhft.chronicle.core.OS.getProcessId0()   44.318596   5,076 ms (44.3%)    5,076 ms    1
net.openhft.chronicle.core.OS.getHostName0()    44.287758   5,072 ms (44.3%)    5,072 ms    1

Creating a second map or builder in the same program causes no delay at all.

I'm using chronicle-map 3.10.1 from maven on macOS 10.12. My Java version is Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
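The getHostName0() half of the delay is commonly the macOS reverse-DNS lookup of the local hostname timing out; a well-known workaround (an assumption here, not a confirmed diagnosis of this report) is mapping the machine's hostname to loopback in /etc/hosts. A quick probe to confirm whether the lookup is the culprit:

```java
import java.net.InetAddress;

public class HostNameTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();
        String name;
        try {
            // the same call OS.getHostName0() ultimately relies on
            name = InetAddress.getLocalHost().getHostName();
        } catch (Exception e) {
            name = "unknown";
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        // several seconds here points at a DNS/hosts misconfiguration
        System.out.println("getHostName took " + ms + " ms -> " + name);
        System.out.println("done");
    }
}
```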

IBM JDK Support

Are there any plans for this project to support the IBM JDK?

I've been testing the Apache Camel chronicle-engine component, which uses various OpenHFT libraries. On the IBM JDK I hit:

Caused by: java.lang.ClassNotFoundException: java.lang.AbstractStringBuilder
	at net.openhft.chronicle.core.util.StringUtils.<clinit>(StringUtils.java:50)
	at net.openhft.chronicle.core.pool.StringBuilderPool.acquireStringBuilder(StringBuilderPool.java:32)
	at net.openhft.chronicle.wire.Wires.acquireStringBuilder(Wires.java:149)
	at net.openhft.chronicle.engine.map.remote.RemoteKVSSubscription.toUri(RemoteKVSSubscription.java:65)
	at net.openhft.chronicle.engine.map.remote.RemoteKVSSubscription.<init>(RemoteKVSSubscription.java:57)
	at net.openhft.chronicle.engine.tree.VanillaAsset$$Lambda$1385.0000000025193610.create(Unknown Source)
	at net.openhft.chronicle.engine.tree.VanillaAsset.createLeafView(VanillaAsset.java:350)
	at net.openhft.chronicle.engine.tree.VanillaAsset.createLeafView(VanillaAsset.java:353)
	at net.openhft.chronicle.engine.tree.VanillaAsset.createLeafView(VanillaAsset.java:353)
	at net.openhft.chronicle.engine.tree.VanillaAsset.lambda$acquireView$8(VanillaAsset.java:413)
	at net.openhft.chronicle.engine.tree.VanillaAsset$$Lambda$1395.0000000026B90ED0.call(Unknown Source)
	at net.openhft.chronicle.threads.Threads.withThreadGroup(Threads.java:42)
	at net.openhft.chronicle.engine.tree.VanillaAsset.acquireView(VanillaAsset.java:412)
	at net.openhft.chronicle.engine.api.tree.AssetTree.acquireSubscription(AssetTree.java:283)
	at net.openhft.chronicle.engine.api.tree.AssetTree.registerSubscriber(AssetTree.java:208)
	at org.apache.camel.component.chronicle.engine.ChronicleEngineConsumer.doStart(ChronicleEngineConsumer.java:58)
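The failing static initializer looks up the package-private java.lang.AbstractStringBuilder, which HotSpot has but other VMs may not expose the same way. A defensive probe (a sketch of the shape a fix might take, not Chronicle's code) that degrades to public APIs instead of failing in <clinit>:

```java
public class StringInternals {
    public static void main(String[] args) {
        // HotSpot resolves this class; a different JDK (e.g. IBM J9) may lay
        // Strings out differently, so the lookup should be optional rather
        // than fatal inside a static initializer.
        try {
            Class<?> asb = Class.forName("java.lang.AbstractStringBuilder");
            System.out.println("found " + asb.getName());
        } catch (ClassNotFoundException e) {
            System.out.println("not found; falling back to public StringBuilder APIs");
        }
        System.out.println("ok");
    }
}
```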

Add a createDirectories with better error messages

If you have a broken symlink where the directory should be:

  • File.mkdirs() returns false without any error message.
  • Files.createDirectories(path) throws a FileAlreadyExistsException, which is confusing.

If a directory is read-only:

  • File.mkdirs() returns false without any error message.

If a file already exists with that name:

  • File.mkdirs() returns false without any error message.
  • Files.createDirectories(path) throws a FileAlreadyExistsException, which is less confusing.
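A sketch of the requested helper (the method name and exact messages are assumptions): distinguish the mkdirs() failure modes instead of returning a bare false:

```java
import java.io.File;
import java.io.IOException;

public class Dirs {
    // Diagnose why directory creation failed rather than returning false.
    static void createDirectoriesWithDiagnostics(File dir) throws IOException {
        if (dir.isDirectory())
            return;                                 // already there, nothing to do
        if (dir.exists())
            throw new IOException(dir + " exists but is not a directory");
        File parent = dir.getParentFile();
        if (parent != null && parent.exists() && !parent.canWrite())
            throw new IOException("parent " + parent + " is not writable");
        if (!dir.mkdirs() && !dir.isDirectory())
            throw new IOException("unable to create " + dir
                    + " (broken symlink or permissions?)");
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"),
                "dirs-demo-" + System.nanoTime());
        createDirectoriesWithDiagnostics(tmp);
        System.out.println(tmp.isDirectory());
        tmp.delete();
    }
}
```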

JLBH: [Feature] Use JLBH in xUnit types of tests

JLBH is a great and simple tool for measuring latencies, partly thanks to its easy-to-follow, human-readable reports. Recently I tried to use it as part of a TDD process when developing components for a latency-sensitive application, and I realized that there is no API for creating automated tests that can run in a production-like, performance-focused CI environment to avoid introducing latency regressions and to continuously improve the performance of the existing app. A fully automated, xUnit style of testing would allow that. Example:

  @Test
  public void shouldRespondWithin50usForTheMostCommonTypeOfRequest() throws Exception {
    // given
    JLBHTask unitUnderTest = new TaskInvokingTheUnitYouWantToMeasure();
    JLBH jlbh = new JLBH(new JLBHOptions().jlbhTask(unitUnderTest) /* other options */);

    // when
    jlbh.start();

    // then
    Result result = ... // somehow retrieve the result
    assertTrue(result.getEndToEndLatencies().get99thPercentileInNs() <= 50_000);
  }

If we had such a tool, it could be used not only for measuring, but also for test-driving latency-sensitive applications.

Typo in TwoLongValue::getValues

TwoLongValue::getValues should return a 2-element long array containing value1 and value2.
A typo causes value2 to be returned in both slots.
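A runnable illustration of the intended behaviour (the interface shape is inferred from the report, not copied from Chronicle):

```java
public class TwoLongValueDemo {
    interface TwoLongValue {
        long getValue1();
        long getValue2();

        // Intended behaviour: one slot per value. The reported typo returned
        // value2 in both slots.
        default long[] getValues() {
            return new long[] { getValue1(), getValue2() };
        }
    }

    public static void main(String[] args) {
        TwoLongValue v = new TwoLongValue() {
            public long getValue1() { return 1; }
            public long getValue2() { return 2; }
        };
        long[] vals = v.getValues();
        System.out.println(vals[0] + "," + vals[1]);
    }
}
```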

Add support for unique micro second timestamps

This is useful for unique ids which have an embedded timestamp:

  • ids remain unique even if a service restarts.
  • ids remain unique provided they are not created at a sustained rate above 1 million per second.
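A minimal sketch of the idea (not Chronicle's implementation): hand out strictly increasing microsecond timestamps, bumping by one microsecond whenever the wall clock has not advanced, so ids stay unique below a sustained 1M/s:

```java
import java.util.concurrent.atomic.AtomicLong;

public class UniqueMicroTimestamps {
    private static final AtomicLong LAST = new AtomicLong();

    // Returns a unique, monotonically increasing microsecond timestamp.
    // Survives restarts because the value tracks the wall clock, and stays
    // unique while demand is below one id per microsecond on average.
    static long uniqueMicros() {
        long nowMicros = System.currentTimeMillis() * 1000;
        return LAST.updateAndGet(last -> nowMicros > last ? nowMicros : last + 1);
    }

    public static void main(String[] args) {
        long a = uniqueMicros();
        long b = uniqueMicros();
        System.out.println(b > a);
    }
}
```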

Java 9 compatibility

java.lang.IllegalAccessError: class net.openhft.chronicle.core.OS (in unnamed module @0x42d1c1ef) cannot access class sun.nio.ch.FileChannelImpl (in module java.base) because module java.base does not export sun.nio.ch to unnamed module @0x42d1c1ef
        at net.openhft.chronicle.core.OS.<clinit>(OS.java:67)

Java:

java version "9-ea"
Java(TM) SE Runtime Environment (build 9-ea+124)
Java HotSpot(TM) 64-Bit Server VM (build 9-ea+124, mixed mode)

ClassAliasPool.addAlias() causes name collisions and deserialization failure

The addAlias method uses Class.getSimpleName(), which is far from unique, especially considering inner classes and enums, which often share a name (e.g. 'Type') and are supposed to be distinguished by the name of the containing class. This causes serialization/deserialization to fail due to a mix-up between classes with the same simple name (e.g. said enums).

FWIW, as a workaround we use something like
cls.getName().replaceAll("(\\w)\\w*\\.", "$1").replace('$', '.'), which has a better chance of being unique yet is still pretty short, along with the overloaded addAlias that takes a name per class. Maybe you could use a similar default instead of the simple class name.
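The workaround above, wrapped into a runnable form: it compresses each package segment to its first letter and turns the nested-class '$' separator into '.':

```java
public class ShortAliases {
    // The regex consumes "segment." and keeps only its first character, so
    // "java.util.Map$Entry" becomes "juMap$Entry", then '$' -> '.'.
    static String shortAlias(Class<?> cls) {
        return cls.getName().replaceAll("(\\w)\\w*\\.", "$1").replace('$', '.');
    }

    public static void main(String[] args) {
        System.out.println(shortAlias(java.util.Map.Entry.class)); // juMap.Entry
    }
}
```

Unlike getSimpleName(), two nested enums both named Type in different containing classes now get distinct aliases.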

Maths.ceilN limitation?

Hi,

I'm just testing out the Maths functions to perform various levels of rounding and to see what the limitations are (if any), as I want to avoid using BigDecimal.

I've come across a case with a number that has many decimal places (64.09159469999999). When running this number through Maths.ceilN with 7 digits, I get the same number back.

Running the quick test below

public class TestCeiling {
	public static void main(String[] args) {
		for (int i = 0; i < 10; i++) {
			double roundedDouble = Maths.floorN(64.09159469999999, i);
			System.out.println("To precision " + i +" - roundedDouble = " + roundedDouble);
		}
	}
}

I get the following output

To precision 0 - roundedDouble = 64.0
To precision 1 - roundedDouble = 64.0
To precision 2 - roundedDouble = 64.09
To precision 3 - roundedDouble = 64.091
To precision 4 - roundedDouble = 64.0915
To precision 5 - roundedDouble = 64.09159
To precision 6 - roundedDouble = 64.091594
To precision 7 - roundedDouble = 64.09159469999999
To precision 8 - roundedDouble = 64.09159469999999
To precision 9 - roundedDouble = 64.09159469999999

Looking at the underlying function, I suspect this is due to the '+8' added to the number of digits I want to round to when computing the factor.

Interestingly enough, when I run the 'false' branch of the calculation

Math.floor(Math.round(d * factor) / 1e8)

I get the expected result. Is this an expected limitation of the library, considering we're dealing with doubles here?

Thanks in advance!
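A minimal reimplementation of the scale-floor-unscale idea (not Chronicle's exact code) makes the limitation visible: once d * factor exceeds 2^53, doubles can no longer represent every integer, so rounding silently degrades, and the '+8' guard digits push a 7-digit request past that threshold:

```java
public class FloorNSketch {
    // Plain scale-floor-unscale; works while d * factor stays below 2^53.
    static double floorN(double d, int digits) {
        double factor = Math.pow(10, digits);
        return Math.floor(d * factor) / factor;
    }

    public static void main(String[] args) {
        double d = 64.09159469999999;
        // without guard digits, 7-digit flooring is fine: d * 1e7 ~ 6.4e8
        System.out.println(floorN(d, 7));
        // with the +8 guard digits the factor becomes 1e15, and
        // d * 1e15 ~ 6.4e16 exceeds 2^53 ~ 9.0e15
        System.out.println(d * 1e15 > (double) (1L << 53));
    }
}
```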

JLBH - Latency vs Load

We are interested in using JLBH to test how a feature's latencies scale under load (i.e. 1, 2, 4, 8, etc. threads of execution).

I don't see that JLBH currently allows the number of threads to be ramped up. It seems the code could be modified to run the test task concurrently.

I'm curious whether JLBH deliberately does not support higher loads. What is your take on this? If there isn't a fundamental issue with supporting higher loads, we may try to make the necessary changes to support this feature. Any guidance before we try this out?

OS.map increasing RSS memory

Hi there,

My software is having memory leak issues, while investigating I came up with this test:

package br.com.s1mbi0se;

import net.openhft.chronicle.core.OS;
import org.junit.Test;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class TestChronicleCore {

    @Test
    public void test() throws IOException, InterruptedException {
        final File file = File.createTempFile("test_", ".data");
        file.deleteOnExit();

        final RandomAccessFile raf = new RandomAccessFile(file, "rw");
        raf.setLength(4 * 1024 * 1024 * 1024L);

        final long[] addresses = new long[1000000];
        final int mapSize = 4 * 1024;

        final FileChannel fileChannel = raf.getChannel();

        for (long i = 0; i < addresses.length; i++) {
            addresses[(int) i] = OS.map(fileChannel, FileChannel.MapMode.READ_WRITE, mapSize * i, mapSize);
        }

        for (long i = 0; i < addresses.length; i++) {
            OS.unmap(addresses[(int) i], mapSize);
        }

        Thread.sleep(60 * 1000);
    }
}

I ran this code with the JVM flags -ea -Xms256M -Xmx256M -XX:+AlwaysPreTouch, so that all heap space is committed and touched by the JVM up front and any change in memory usage must come from off-heap.

When the program reaches the last statement (Thread.sleep), the process is using around 522M of resident memory (RSS), while I expected it to be around 300M (that's how much it uses if I don't map anything). Please note that I am unmapping everything before it reaches this line.

Is this a bug, am I using it incorrectly or is this somehow expected?

Thanks in advance.

EDIT:
This is output of pmap -XX PID: https://gist.github.com/telles-simbiose/d546c61c19da7e7b8b76e5f534b8d6d2

I think the issue is related to the mapping at address d5580000; its size changes according to the variables I use in this test (it uses more memory if I map more times).

EDIT 2:
I wrote a C program that should have the same behaviour as the Java program: https://gist.github.com/telles-simbiose/da0c8fc66b57078e7ef7fab3a5482b82
It does not leak memory.

OS.java map() method limits Windows to 4GB.

net.openhft.chronicle.core.OS.map() does not check for 64-bit Windows:

public static long map(@NotNull FileChannel fileChannel, FileChannel.MapMode mode, long start, long size)
        throws IOException, IllegalArgumentException {
    if (isWindows() && size > 4L << 30)
        throw new IllegalArgumentException("Mapping more than 4096 MiB is unusable on Windows, size = " + (size >> 20) + " MiB");
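A sketch of the refinement the report asks for (using standard Java system properties; this is the issue's proposal, not a confirmed fix): only apply the 4 GiB cap when running on a 32-bit Windows JVM:

```java
public class WindowsBitnessCheck {
    // os.name and os.arch are standard system properties; a 64-bit JVM
    // reports an arch containing "64" (e.g. "amd64", "x86_64").
    static boolean is32BitWindows() {
        String os = System.getProperty("os.name", "");
        String arch = System.getProperty("os.arch", "");
        return os.startsWith("Windows") && !arch.contains("64");
    }

    public static void main(String[] args) {
        System.out.println("32-bit Windows: " + is32BitWindows());
        System.out.println("ok");
    }
}
```

The size check in OS.map() could then read `if (is32BitWindows() && size > 4L << 30)` instead of capping all Windows JVMs.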

Can't make a read operation on a full byte array

Is there any technical reason not to implement a byte array read operation?

As far as I can see, it is currently possible to write a byte array directly, but not to read a full byte array in one operation.

System alignment requirements violated

Chronicle is writing Shorts to non-Short-aligned addresses causing the JVM to crash with a bus error. Most RISC architectures require larger-than-byte memory accesses to be size-aligned.

--- called from signal handler with signal 10 (SIGBUS) ---
ffffffff7d976a7c Unsafe_SetShort (ffffffff7de68658, 1001ec800, ffffffff7abfad60, 4d, 2020, ffffffff7de68648) + 168
ffffffff6a01191c * sun/misc/Unsafe.putShort(Ljava/lang/Object;JS)V+0
ffffffff6a0118c0 * sun/misc/Unsafe.putShort(Ljava/lang/Object;JS)V+0
ffffffff6a008068 * net/openhft/chronicle/core/UnsafeMemory.writeShort(Ljava/lang/Object;JS)V+36
ffffffff6a0080b4 * net/openhft/chronicle/bytes/HeapBytesStore.writeShort(JS)Lnet/openhft/chronicle/bytes/HeapBytesStore;+32
ffffffff6a007f58 * net/openhft/chronicle/bytes/HeapBytesStore.writeShort(JS)Lnet/openhft/chronicle/bytes/RandomDataOutput;+8

See hazelcast/hazelcast#5518 for a similar issue in a different project.
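On strict-alignment architectures a 2-byte store must be 2-byte aligned; a portable fallback (a sketch of one possible mitigation, not Chronicle's actual fix) is to decompose the short into two single-byte stores, which are always aligned:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UnalignedShort {
    // Write a short at any offset, one byte at a time (little-endian order
    // assumed here), avoiding the aligned 2-byte store that raises SIGBUS
    // on strict-alignment RISC hardware.
    static void putShortUnaligned(byte[] a, int off, short v) {
        a[off] = (byte) v;
        a[off + 1] = (byte) (v >> 8);
    }

    public static void main(String[] args) {
        byte[] a = new byte[4];
        putShortUnaligned(a, 1, (short) 0x1234);   // odd, i.e. unaligned, offset
        short read = ByteBuffer.wrap(a).order(ByteOrder.LITTLE_ENDIAN).getShort(1);
        System.out.println(Integer.toHexString(read));
    }
}
```

The cost is an extra store on architectures that tolerate unaligned access, so implementations often branch on the platform.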

Check that boolean is either true or false.

To reduce typos, boolean values are checked, case-insensitively, to be one of

true
t
yes
y

or

false
f
no
n

An empty String is OK when parsing a Boolean. Anything else may produce a warning or an error.
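The rules above can be sketched as follows (names and the exact rejection behaviour are assumptions; the spec says "warning or error", so an exception stands in for both):

```java
import java.util.Arrays;
import java.util.List;

public class BoolParse {
    private static final List<String> TRUES  = Arrays.asList("true", "t", "yes", "y");
    private static final List<String> FALSES = Arrays.asList("false", "f", "no", "n");

    // Case-insensitive match against the accepted word lists; empty input
    // yields a null Boolean; anything else is rejected.
    static Boolean parseBoolean(String s) {
        String t = s.trim().toLowerCase();
        if (t.isEmpty()) return null;
        if (TRUES.contains(t))  return Boolean.TRUE;
        if (FALSES.contains(t)) return Boolean.FALSE;
        throw new IllegalArgumentException("Not a boolean: " + s);
    }

    public static void main(String[] args) {
        System.out.println(parseBoolean("Yes"));
        System.out.println(parseBoolean("N"));
        System.out.println(parseBoolean(""));
    }
}
```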

Fixed reference counting issues

10:27:38.985 [main] INFO net.openhft.chronicle.queue.DirectoryUtils - Tmp dir: /Users/robaustin/git-projects/Chronicle-Queue/target/rollCycleStress-jia20sb6
Queue dir: /Users/robaustin/git-projects/Chronicle-Queue/target/rollCycleStress-jia20sb6 at 2018-06-11T09:27:38.985Z
Running test with 2 writers and 2 readers, sleep 50000ns
Writing 40000 messages with 50000ns interval
Should take ~1sec

java.lang.Throwable: 6e1ca577-main/reader-1 creation ref-count=1
	at net.openhft.chronicle.core.ReferenceCounter.newRefCountHistory(ReferenceCounter.java:45)
	at net.openhft.chronicle.core.ReferenceCounter.<init>(ReferenceCounter.java:35)
	at net.openhft.chronicle.core.ReferenceCounter.onReleased(ReferenceCounter.java:40)
	at net.openhft.chronicle.bytes.AbstractBytes.<init>(AbstractBytes.java:38)
	at net.openhft.chronicle.bytes.MappedBytes.<init>(MappedBytes.java:59)
	at net.openhft.chronicle.bytes.MappedBytes.<init>(MappedBytes.java:55)
	at net.openhft.chronicle.bytes.MappedBytes.mappedBytes(MappedBytes.java:113)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.bytes(SingleChronicleQueueStore.java:358)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.resetWires(SingleChronicleQueueExcerpts.java:1745)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.cycle(SingleChronicleQueueExcerpts.java:1950)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.nextIndexWithNextAvailableCycle0(SingleChronicleQueueExcerpts.java:1547)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.nextIndexWithNextAvailableCycle(SingleChronicleQueueExcerpts.java:1508)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.endOfCycle(SingleChronicleQueueExcerpts.java:1294)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.next0(SingleChronicleQueueExcerpts.java:1269)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:1197)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:1129)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:270)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:237)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.lang.Throwable: 6e1ca577-main/reader-1 Release ref-count=0
	at net.openhft.chronicle.core.ReferenceCounter.recordRelease(ReferenceCounter.java:88)
	at net.openhft.chronicle.core.ReferenceCounter.release(ReferenceCounter.java:79)
	at net.openhft.chronicle.bytes.AbstractBytes.release(AbstractBytes.java:469)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.close(SingleChronicleQueueExcerpts.java:1135)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.lambda$close$4(SingleChronicleQueue.java:537)
	at java.util.WeakHashMap.forEach(WeakHashMap.java:1025)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.close(SingleChronicleQueue.java:537)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:302)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:237)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.lang.Throwable: 6e1ca577-main/queue-thread-local-cleaner-daemon Release ref-count=0
	at net.openhft.chronicle.core.ReferenceCounter.recordRelease(ReferenceCounter.java:88)
	at net.openhft.chronicle.core.ReferenceCounter.release(ReferenceCounter.java:74)
	at net.openhft.chronicle.bytes.AbstractBytes.release(AbstractBytes.java:469)
	at net.openhft.chronicle.queue.impl.single.StoreComponentReferenceHandler.processWireQueue(StoreComponentReferenceHandler.java:74)
	at net.openhft.chronicle.queue.impl.single.StoreComponentReferenceHandler.lambda$static$0(StoreComponentReferenceHandler.java:43)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.lang.Throwable: 21bb9e0f-main/reader creation ref-count=1
	at net.openhft.chronicle.core.ReferenceCounter.newRefCountHistory(ReferenceCounter.java:45)
	at net.openhft.chronicle.core.ReferenceCounter.<init>(ReferenceCounter.java:35)
	at net.openhft.chronicle.core.ReferenceCounter.onReleased(ReferenceCounter.java:40)
	at net.openhft.chronicle.bytes.AbstractBytes.<init>(AbstractBytes.java:38)
	at net.openhft.chronicle.bytes.MappedBytes.<init>(MappedBytes.java:59)
	at net.openhft.chronicle.bytes.MappedBytes.<init>(MappedBytes.java:55)
	at net.openhft.chronicle.bytes.MappedBytes.mappedBytes(MappedBytes.java:113)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.bytes(SingleChronicleQueueStore.java:358)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.resetWires(SingleChronicleQueueExcerpts.java:1745)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.cycle(SingleChronicleQueueExcerpts.java:1950)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.nextIndexWithNextAvailableCycle0(SingleChronicleQueueExcerpts.java:1547)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.nextIndexWithNextAvailableCycle(SingleChronicleQueueExcerpts.java:1508)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.endOfCycle(SingleChronicleQueueExcerpts.java:1294)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.next0(SingleChronicleQueueExcerpts.java:1269)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:1197)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:1129)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:270)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:237)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.lang.Throwable: 21bb9e0f-main/reader Release ref-count=0
	at net.openhft.chronicle.core.ReferenceCounter.recordRelease(ReferenceCounter.java:88)
	at net.openhft.chronicle.core.ReferenceCounter.release(ReferenceCounter.java:79)
	at net.openhft.chronicle.bytes.AbstractBytes.release(AbstractBytes.java:469)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.close(SingleChronicleQueueExcerpts.java:1135)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.lambda$close$4(SingleChronicleQueue.java:537)
	at java.util.WeakHashMap.forEach(WeakHashMap.java:1025)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.close(SingleChronicleQueue.java:537)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:302)
	at net.openhft.chronicle.queue.impl.single.RollCycleMultiThreadStressTest$Reader.call(RollCycleMultiThreadStressTest.java:237)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.lang.Throwable: 21bb9e0f-main/queue-thread-local-cleaner-daemon Release ref-count=0
	at net.openhft.chronicle.core.ReferenceCounter.recordRelease(ReferenceCounter.java:88)
	at net.openhft.chronicle.core.ReferenceCounter.release(ReferenceCounter.java:74)
	at net.openhft.chronicle.bytes.AbstractBytes.release(AbstractBytes.java:469)
	at net.openhft.chronicle.queue.impl.single.StoreComponentReferenceHandler.processWireQueue(StoreComponentReferenceHandler.java:74)
	at net.openhft.chronicle.queue.impl.single.StoreComponentReferenceHandler.lambda$static$0(StoreComponentReferenceHandler.java:43)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
	at java.util.concurrent.FutureTask.run(FutureTask.java)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
10:27:40.082 [main] INFO net.openhft.chronicle.queue.DirectoryUtils - Tmp dir: /Users/robaustin/git-projects/Chronicle-Queue/target/rollCycleStress-jia20sb7
All messages written in 1secs at rate of 36,563/sec 18,282/sec per writer (actual writeLatency 54,700ns)

Test complete
