
chronicle-logger's Introduction

Chronicle Overview

Chronicle Software provides libraries to help with fast data. The majority of our customers are in financial services. Our products include:

Chronicle FIX/ITCH Engine - Low latency FIX/ITCH engine in Java for all versions of FIX. Can parse and generate messages within 1 microsecond.

Chronicle Microservices Framework - Microservices built with Chronicle Services are efficient, easy to build, test, and maintain. Equally importantly, they provide exceptionally high throughput, low latency, and transparent HA/DR.

Chronicle Matching Engine - forms the backbone for a resilient and scalable exchange solution. It provides order matching, validation, and risk checks with high capacity and low latency. It has a modular and flexible design which enables it to be used stand-alone, or seamlessly integrated with Chronicle FIX and Chronicle Services.

Chronicle EFX - built on Chronicle Microservices, EFX contains components for Aggregation, Pricing, Hedging, Position Keeping, P&L, Market Gateway and Algo containers. EFX allows the customer to use off-the-shelf functionality built and maintained by Chronicle, or to extend and customise it with their own algos and IP - the best compromise between "buy" and "build".

Chronicle Queue and also Chronicle Queue Enterprise - using Chronicle Queue for low latency message passing provides an effectively unlimited buffer between producers and consumers and a complete audit trail of every message sent. Queue Enterprise provides even lower latencies and additional delivery semantics - for example, only process a message once it is guaranteed to have been replicated to one or more other hosts.

Chronicle Map is a key-value store sharing persisted memory between processes, either on the same server or across networks. CM is designed to store the data off-heap, which means it minimizes heap usage and garbage collection, allowing data to be stored with sub-microsecond latency. CM is a structured key-value store able to support exceptionally high update rates and high-throughput data, e.g. OPRA market data, with minimal configuration. Replication is provided by Chronicle Map Enterprise.

Contributor agreement

For us to accept contributions to our open source libraries, we require contributors to sign the contributor agreement below.

Documentation in this repo

This repo contains the following docs

  • Java Version Support documents which versions of Java/JVM are supported by Chronicle libraries

  • Platform Support documents which Operating Systems are supported by Chronicle libraries

  • Version Support explains Chronicle’s version numbers and release timetable

  • Anatomy shows a graphical representation of the OpenHFT projects and their dependencies

  • Reducing Garbage contains tips and tricks to reduce garbage

chronicle-logger's People

Contributors

dependabot[bot], dpisklov, emmachronicle, epickrram, feutche, hft-team-city, jansturenielsen, jerryshea, lburgazzoli, leventov, martyix, minborg, mvfranz, nicktindall, peter-lawrey, peter-lawrey-admin, robaustin, sullis, tomshercliff


chronicle-logger's Issues

Chronicle-Logger rollover to compressed file(gzip)

Hi,

I am using chronicle-logger-log4j-2 to log messages, and I need to roll the .cq4 files over every 1000 MB and compress them, just as the file logger does. But I cannot find a configuration option to achieve this.

Is there a way to configure Chronicle logger to achieve this?

Text chronicles with Logback result in a NullPointerException

I have a project using Logback where the binary appenders work, but the text appenders throw an NPE when they try to doAppend. The stacktrace follows:

Caused by: java.lang.NullPointerException
        at net.openhft.chronicle.logger.logback.TextChronicleAppender.doAppend(TextChronicleAppender.java:74)
        at net.openhft.chronicle.logger.logback.TextIndexedChronicleAppender.doAppend(TextIndexedChronicleAppender.java:70)
        at net.openhft.chronicle.logger.logback.TextIndexedChronicleAppender.doAppend(TextIndexedChronicleAppender.java:30)
        at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
        at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
        at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
        at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
        at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:396)
        at ch.qos.logback.classic.Logger.log(Logger.java:788)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:483)
        at org.jboss.logging.Slf4jLocationAwareLogger.doLog(Slf4jLocationAwareLogger.java:89)
        at org.jboss.logging.Slf4jLocationAwareLogger.doLogf(Slf4jLocationAwareLogger.java:82)
        at org.jboss.logging.Logger.logf(Logger.java:2096)
        at org.xnio._private.Messages_$logger.greeting(Messages_$logger.java:652)
        at org.xnio.Xnio.<clinit>(Xnio.java:92)

An initial guess is that the timeStampFormatter is still null.

My logback config looks like this:

<!-- Logback configuration. See http://logback.qos.ch/manual/index.html -->
<!-- <configuration scan="true" scanPeriod="10 seconds"> -->
<configuration scan="false">

  <!-- Trying Chronicle file output -->
  <appender name="CHRONICLE" class="net.openhft.chronicle.logger.logback.BinaryVanillaChronicleAppender">
    <path>logs/chronicle</path>
    <!-- <includeCallerData>true</includeCallerData> -->
    <!-- <includeMappedDiagnosticContext>true</includeMappedDiagnosticContext> -->

    <!--
    Configure the underlying VanillaChronicle, for a list of the options have
    a look at net.openhft.chronicle.VanillaChronicleConfig
    -->
    <!-- <chronicleConfig>
        <dataCacheCapacity>128</dataCacheCapacity>
    </chronicleConfig> -->
  </appender>

  <!-- Simple file output -->
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <!-- encoder defaults to ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>

    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- rollover daily -->
      <fileNamePattern>logs/niotooling-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <timeBasedFileNamingAndTriggeringPolicy
          class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <!-- or whenever the file size reaches 64 MB -->
        <maxFileSize>64 MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
    </rollingPolicy>

    <!-- Safely log to the same file from multiple JVMs. Degrades performance! -->
    <prudent>true</prudent>
  </appender>


  <!-- Console output -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- encoder defaults to ch.qos.logback.classic.encoder.PatternLayoutEncoder -->
    <encoder>
      <pattern>%-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <!-- Only log level INFO and above -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
  </appender>


  <!-- Enable FILE and STDOUT appenders for all log messages.
       By default, only log at level INFO and above. -->
  <root level="INFO">
    <appender-ref ref="CHRONICLE" />
    <appender-ref ref="FILE" />
    <appender-ref ref="STDOUT" />
  </root>

  <!-- For loggers in the these namespaces, log at all levels. -->
  <logger name="user" level="ALL" />
  <!-- To log pedestal internals, enable this and change ThresholdFilter to DEBUG
    <logger name="io.pedestal" level="ALL" />
  -->

</configuration>

Bug when logging exception with the chronicle logback appender

Hello
When I use the logback appender to log simple messages it works fine.
However, if I try to log a message together with an exception, I get an IndexOutOfBoundsException.
Below is the whole error message:

java.lang.IndexOutOfBoundsException: position is beyond the end of the buffer 133 > -280165940
at net.openhft.lang.io.NativeBytes.checkEndOfBuffer(NativeBytes.java:522)
at net.openhft.lang.io.AbstractBytes.writeObject(AbstractBytes.java:1921)
at net.openhft.chronicle.logger.logback.BinaryChronicleAppender.doAppend(BinaryChronicleAppender.java:106)
at net.openhft.chronicle.logger.logback.BinaryChronicleAppender.doAppend(BinaryChronicleAppender.java:27)
at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
...

I tried this on a Windows system.
Let me know if you need more details
Regards

Alexandre

Serialization of exceptions

Hello
This ticket is related to the comments on former ticket 15.
When a binary appender is used, an exception is serialized with the "JDKZObjectSerializer" serializer. This means that you cannot read the logs unless the exception classes used by the application that generated them are on your classpath.
This prevents ChroniTail, ChroniCat and the other tools from being used as standalone tools...
It also effectively prevents applications from refactoring their exception classes.
Instead of serializing the exception itself, I think the logger should serialize an object containing two strings (see the sketch after this list):

  1. the exception message
  2. the exception stack trace as a string
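A minimal, JDK-only sketch of the conversion being proposed: capture the message and the rendered stack trace as plain strings before the event is written. The wrapper type and field names here are illustrative, not part of chronicle-logger:

import java.io.PrintWriter;
import java.io.StringWriter;

public final class ThrowableSnapshot {
    public final String message;
    public final String stackTrace;

    public ThrowableSnapshot(Throwable t) {
        this.message = t.getMessage();
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw)); // render the full stack trace as text
        this.stackTrace = sw.toString();
    }
}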

Log4j2 and invalid attributes

Hi,

when I define a log4j2 logger as the readme suggests:

        <Chronicle name="TRACE_CQ">
            <path>data/traces</path>
            <includeCallerData>true</includeCallerData>
            <includeMappedDiagnosticContext>false</includeMappedDiagnosticContext>
            <chronicleCfg>
                <blockSize>65536</blockSize>
                <bufferCapacity>65536</bufferCapacity>
            </chronicleCfg>
        </Chronicle>

...

        <Logger name="com.project" level="trace" includeLocation="true">
            <AppenderRef ref="TRACE_CQ" level="trace" />
        </Logger>

I get the following error:

2017-11-07 09:38:56,863 main ERROR Chronicle contains invalid attributes "includeCallerData", "includeMappedDiagnosticContext"

So I guess the readme (https://github.com/OpenHFT/Chronicle-Logger#chronicle-logger-log4j-2) is incorrect, and those two attributes, includeCallerData and includeMappedDiagnosticContext, should not be present.

Am I right?

Best regards,
Martin

OWASP giving security error on Chronicle-Logger

Hi, I ran an OWASP dependency check on your library and got the error shown below:

MAVEN DEPENDENCY

org.owasp : dependency-check-maven : 6.5.3
(plus two boolean configuration options set to true)

ERROR:

One or more dependencies were identified with known vulnerabilities in LabiysWebService:
chronicle-wire-2.22ea11.jar (pkg:maven/net.openhft/chronicle-wire@2.22ea11, cpe:2.3:a:wire:wire:2.22.ea11:*:*:*:*:*:*:*) : CVE-2018-8909, CVE-2020-15258, CVE-2020-27853, CVE-2021-21301, CVE-2021-32665, CVE-2021-32666, CVE-2021-32755, CVE-2021-41093

kotlin-stdlib-1.4.10.jar (pkg:maven/org.jetbrains.kotlin/kotlin-stdlib@1.4.10, cpe:2.3:a:jetbrains:kotlin:1.4.10:*:*:*:*:*:*:*) : CVE-2020-29582
kotlin-stdlib-common-1.4.0.jar (pkg:maven/org.jetbrains.kotlin/kotlin-stdlib-common@1.4.0, cpe:2.3:a:jetbrains:kotlin:1.4.0:*:*:*:*:*:*:*) : CVE-2020-15824, CVE-2020-29582
log4j-slf4j-impl-2.17.0.jar (pkg:maven/org.apache.logging.log4j/log4j-slf4j-impl@2.17.0, cpe:2.3:a:apache:log4j:2.17.0:*:*:*:*:*:*:*) : CVE-2021-44832

See the dependency-check report for more details.

How to append extended information to logger

I wish to include traceid/spanid which are injected from opentelemetry instrumentation to logs. The console Appender can be configured as follows to do this:

<Console name="Console" target="SYSTEM_OUT">
    <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} traceId: %X{trace_id} spanId: %X{span_id} - %msg%n" />
</Console>

And it outputs something like this:

15:19:24.154 [main] DEBUG MyApp traceId: b19e837e754e1cae22355f807f65c05d spanId: a82dd9d711431773 - Entering application.

How can I achieve this with chronicle logger? I do not wish to change existing log statements as some of the required logs are coming from third party libraries.
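Since %X{trace_id} and %X{span_id} read from the mapped diagnostic context, one hedged approach is to make sure the chronicle appender captures the MDC (the includeMappedDiagnosticContext option shown in other configurations on this page) and rely on the OpenTelemetry instrumentation to populate it. A minimal sketch of the MDC side through the SLF4J API, with key names copied from the pattern above (in practice the instrumentation performs these puts for you):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class TraceLogging {
    private static final Logger LOG = LoggerFactory.getLogger(TraceLogging.class);

    void handleRequest() {
        // Values normally injected by the OpenTelemetry instrumentation; set here only for illustration.
        MDC.put("trace_id", "b19e837e754e1cae22355f807f65c05d");
        MDC.put("span_id", "a82dd9d711431773");
        try {
            LOG.debug("Entering application."); // existing log statements stay unchanged
        } finally {
            MDC.remove("trace_id");
            MDC.remove("span_id");
        }
    }
}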

Simple Example Setup

Would you mind providing a simple example project with a pom plus a main(String[] args) doing a single log statement (preferably with slf4j)? Also, 1-5 sentences on how this interacts with the mapped logging libraries (e.g. file rolling/zipping, log format) and their configuration would help.

kind regards,
Rüdiger
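Not an official example, but a minimal sketch of what such a main class usually looks like, assuming chronicle-logger-slf4j (or a logback/log4j2 binding plus the corresponding chronicle appender) is on the classpath and configured; the class name is illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloChronicle {
    public static void main(String[] args) {
        Logger log = LoggerFactory.getLogger(HelloChronicle.class);
        // Plain SLF4J call; the chronicle binding/appender decides where and how the entry is persisted.
        log.info("hello chronicle, arg count = {}", args.length);
    }
}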

ChronicleLogReader fails while running processLog()

Hello again.
I wrote a simple log with logback + the chronicle appender and read it with ChroniCat, following the simple approach found on the chronicle-logger GitHub page. An exception occurred while ChronicleLogReader::processLogs() was trying to process the "throwable" field, which does not exist in my case. I temporarily solved the problem by writing a custom ChronicleLogReader that omits the "throwable" part.

Below is an exception message

java.lang.IllegalArgumentException: primitive: void
	at net.openhft.chronicle.core.util.ObjectUtils.lambda$static$3(ObjectUtils.java:75)
	at net.openhft.chronicle.core.ClassLocal.computeValue(ClassLocal.java:53)
	at java.lang.ClassValue.getFromHashMap(ClassValue.java:227)
	at java.lang.ClassValue.getFromBackup(ClassValue.java:209)
	at java.lang.ClassValue.get(ClassValue.java:115)
	at net.openhft.chronicle.core.util.ObjectUtils.newInstance(ObjectUtils.java:348)
	at net.openhft.chronicle.wire.WireInternal.throwable(WireInternal.java:208)
	at net.openhft.chronicle.wire.ValueIn.throwable(ValueIn.java:552)
	at net.openhft.chronicle.logger.tools.ChronicleLogReader.processLogs(ChronicleLogReader.java:122)
	at net.openhft.chronicle.logger.tools.ChroniCat.main(ChroniCat.java:41)

Below is my logback configuration:

<appender name="CHRONICLE_FILE" class="net.openhft.chronicle.logger.logback.ChronicleAppender">
    <path>${USER_HOME}/ap/chronicle</path>
    <chronicleConfig>
        <blockSize>128</blockSize>
    </chronicleConfig>
</appender>

Below is the code that I used to write the log:

for (int i = 0; i < iter; ++i) {
    DefaultLogger.logger.info("default value long={} double={} str={}", longs[i], doubles[i], strs[i]);
}

Chronicle-Logger time-based rolling and compression

Chronicle-Logger rollover to compressed file(gzip)

If it supports time-based rolling, can we roll over every hour, and how can the output be compressed like the file logger's? Below is my XML configuration:

 <Properties>
    <Property name="name">chronicle-queue</Property>
    <Property name="logPath">logs/chronicle-log4j2/</Property>
    <Property name="pattern">[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n</Property>
</Properties>    

<Appenders>
    <Chronicle name="CHRONICLE">
        <path>${logPath}chronicle</path>
    </Chronicle>

    <RollingFile name="File" fileName="${logPath}${name}.log"
                 filePattern="${logPath}/${name}-%d{yyyy-MM-dd}-%i.log.gz">
        <PatternLayout pattern="${pattern}"/>
        <Policies>
            <TimeBasedTriggeringPolicy interval="3600" modulate="true"/>
            <SizeBasedTriggeringPolicy size="100MB"/>
        </Policies>
        <DefaultRolloverStrategy max="10"/>
    </RollingFile>
</Appenders>

Bump log4j2 version

Hello.

The version of log4j2 you're building against is 2.8.x, which is rather old and under-performing (twice as slow using similar setups).

Could you please update the log4j2 version to 2.10 (Spring 2.0.x compatibility here is a major plus)? The versions are binary/code-wise incompatible, so it's not just a matter of dependency exclusion/override.

10x

Is Chronicle Logger compatible with Chronicle Queue v4?

Hi,

I'm trying to setup Chronicle Logger 1.1.1:

  1. build.gradle:
compile group: 'net.openhft', name: 'chronicle-logger-logback', version: '1.1.1'
  2. logback.xml:
    <appender name  = "PERFORMANCE_FILE_CQ4"
              class = "net.openhft.chronicle.logger.logback.TextVanillaChronicleAppender">
        <path>${dataFolder:-data}/performance</path>
        <formatMessage>false</formatMessage>
        <includeCallerData>false</includeCallerData>
        <includeMappedDiagnosticContext>false</includeMappedDiagnosticContext>
        <chronicleConfig>
            <dataCacheCapacity>1</dataCacheCapacity>
        </chronicleConfig>
    </appender>

and Logback really does create the data/performance folder. However, it keeps creating new files like this:

(screenshot of the directory listing, showing many newly created files)

I'm not really sure what is happening. My expectation was that a single file would be created.

Thank you for any help

Regards,
Martin

Example of Logs aggregation

I have several machines with 3 apps running on each. Each app writes logs on its local machine. I want all apps to write to a common storage location on a remote machine. How can I configure Chronicle Logger to do that?

Configuration bug when using logback along with chronicle-logger

Hello
When I use logback.xml to configure the appender settings of the class "net.openhft.chronicle.logger.logback.BinaryVanillaChronicleAppender", the property dataBlockSize is not recognized by logback, while indexBlockSize works. I tracked down the source code and found that when logback builds the bean information via "Introspector.getBeanInfo(obj.getClass())", only the read method for dataBlockSize exists and the write method cannot be found. As a result, the following error is reported:

ERROR in ch.qos.logback.core.joran.spi.Interpreter@28:19 - no applicable action for [dataBlockSize], current ElementPath is [[configuration][appender][chronicleConfig][dataBlockSize]]
I think the root cause is a type mismatch on dataBlockSize in the class VanillaLogAppenderConfig:

public long getDataBlockSize() {
    return this.builder.dataBlockSize();
}

//TODO: long vs int
public void setDataBlockSize(int dataBlockSize) {
    this.builder.dataBlockSize(dataBlockSize);
}


The getter and setter should use the same type so that bean introspection resolves the property and its write method correctly.

Regards,
Mark
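A minimal sketch of the kind of fix described above. The exact type accepted by the underlying builder is an assumption here; the explicit cast keeps the bean property consistently long either way:

public long getDataBlockSize() {
    return this.builder.dataBlockSize();
}

// The setter type now matches the getter, so Introspector.getBeanInfo()
// exposes a write method for the dataBlockSize property.
public void setDataBlockSize(long dataBlockSize) {
    // Assumption: the builder accepts an int; narrow explicitly if so.
    this.builder.dataBlockSize((int) dataBlockSize);
}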

Chronicle-Logger doesn't work on Solaris/SPARC due to unsafe memory access

I am trying to build the latest snapshot of Chronicle-Logger on Solaris 11/SPARC architecture and JDK 14, but the tests are hitting the following problem:
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.034 s <<< FAILURE! - in net.openhft.chronicle.logger.DefaultChronicleLogWriterTest
[ERROR] net.openhft.chronicle.logger.DefaultChronicleLogWriterTest.testWrite  Time elapsed: 0.825 s <<< ERROR!
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
	at net.openhft.chronicle.wire.BinaryWire$FixedBinaryValueOut.marshallable(BinaryWire.java:2006)
	at net.openhft.chronicle.wire.ValueOut.typedMarshallable(ValueOut.java:623)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:937)
	at net.openhft.chronicle.queue.impl.table.SingleTableBuilder.writeTableStore(SingleTableBuilder.java:175)
	at net.openhft.chronicle.queue.impl.table.SingleTableBuilder.lambda$build$2(SingleTableBuilder.java:140)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.doWithLock(SingleTableStore.java:141)
	at net.openhft.chronicle.queue.impl.table.SingleTableStore.doWithExclusiveLock(SingleTableStore.java:124)
	at net.openhft.chronicle.queue.impl.table.SingleTableBuilder.build(SingleTableBuilder.java:137)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder.initializeMetadata(SingleChronicleQueueBuilder.java:452)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder.preBuild(SingleChronicleQueueBuilder.java:1091)
	at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder.build(SingleChronicleQueueBuilder.java:325)
	at net.openhft.chronicle.logger.DefaultChronicleLogWriterTest.testWrite(DefaultChronicleLogWriterTest.java:58)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
	at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
	at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
	at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:377)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:284)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:248)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:167)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169)
	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581)

That is typically an issue of memory alignment, which is strictly enforced by SPARC CPUs.

Please let me know if you need more information or if I can help with access to a SPARC machine.

working example

Is there a complete working example of the logger... a simple hello world? I get a "ChronicleLoggerManager uninitialized" error.

Compression

Hi,

Log4j2 gives its appenders the ability to compress log files after rollover. Does Chronicle Queue support anything similar? Can I emulate it somehow?

Thanks

Regards,
Martin
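A hedged, JDK-only sketch of one way to emulate this externally: gzip a rolled .cq4 cycle file once it is no longer being written to. The class name is illustrative, and the caller is responsible for only passing files from completed cycles:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPOutputStream;

public final class LogFileCompressor {

    // Gzips the given file to "<name>.gz" next to it and deletes the original.
    public static void compress(Path cq4File) throws IOException {
        Path gz = cq4File.resolveSibling(cq4File.getFileName() + ".gz");
        try (OutputStream out = new GZIPOutputStream(Files.newOutputStream(gz))) {
            Files.copy(cq4File, out); // stream the raw queue file into the gzip stream
        }
        Files.delete(cq4File); // only safe once the cycle is complete
    }
}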

Allow programmatic logger configuration in slf4j implementation

The current StaticLoggerBinder calls a no-args constructor that always forces the system property to be set, and this system property has to point to a file rather than a location on the CLASSPATH or something generated on the fly. I wrote a workaround that creates a temp file, writes the config into it, and then sets the system property -- but this is a bit ugly.

Proposal: allow a secondary system property to set a custom logger configurator, or maybe even a public static setter of some sort that could be set up in advance, before log initialization. This is not ideal either, but given SLF4J's insistence on static initialization it's probably the best option available. Also: allow loading from the CLASSPATH as well as from a File in the standard mechanism.

If you prefer a pull request let me know.
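A minimal sketch of the temp-file workaround described above. The system property name used here is an assumption (check the binding's source for the actual key it reads); the property keys mirror the chronicle-logger.properties example further down this page:

import java.nio.file.Files;
import java.nio.file.Path;
import org.slf4j.LoggerFactory;

public class ChronicleSlf4jBootstrap {
    public static void main(String[] args) throws Exception {
        // Write a minimal configuration to a temporary file.
        Path cfg = Files.createTempFile("chronicle-logger", ".properties");
        Files.writeString(cfg, String.join(System.lineSeparator(),
                "chronicle.logger.root.path=data/chronicle-logs/",
                "chronicle.logger.root.level=debug"));
        // ASSUMPTION: the property name below is illustrative; the binding may use a different key.
        System.setProperty("chronicle.logger.properties", cfg.toString());
        // Only touch SLF4J after the property is set, so static initialization sees it.
        LoggerFactory.getLogger(ChronicleSlf4jBootstrap.class).info("configured");
    }
}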

logging an arbitrary object using Chronicle logger throws IllegalStateException

When attempting to use the slf4j facade with Chronicle logger we're running into an IllegalStateException which can be traced to ValueOut.java in chronicle wire.

        throw new IllegalStateException("type=" + value.getClass() +
                " is unsupported, it must either be of type Marshallable, String or " +
                "AutoBoxed primitive Object");

Slf4j (and log4j/logback) place no restriction on the type of object that gets logged. In the absence of known serializability, those libraries seem to just invoke .toString(). Unfortunately, the code that logs the arbitrary objects is library code that we don't have control over. Is there a way to configure Chronicle logger to support logging arbitrary objects via toString()?
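For code you do control, one hedged workaround is to render the argument to a String before handing it to the logger, so the appender only ever sees a String or an auto-boxed primitive (this does not help with third-party callers, as noted above). A minimal sketch; the class and method names are illustrative:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class ToStringLogging {
    private static final Logger LOG = LoggerFactory.getLogger(ToStringLogging.class);

    static void logSafely(Object value) {
        // String.valueOf() renders the object up front, so the binary appender
        // only has to serialize a plain String rather than an arbitrary type.
        LOG.info("value={}", String.valueOf(value));
    }
}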

Infinite loop when mysql-connector-java or netty is on the classpath

An infinite loop, ending in a StackOverflowError, occurs when trying to use Chronicle-Logger with mysql-connector-java and also netty on the classpath.

[main] WARN net.openhft.chronicle.wire.WireMarshaller - Found this$0, in class com.mysql.cj.PerConnectionLRUFactory$PerConnectionLRU which will be ignored!
java.lang.StackOverflowError
	at net.openhft.chronicle.bytes.NativeBytesStore.addressForWrite(NativeBytesStore.java:544)
	at net.openhft.chronicle.bytes.AbstractBytes.addressForWritePosition(AbstractBytes.java:968)
	at net.openhft.chronicle.bytes.MappedBytes.append8bit0(MappedBytes.java:647)
	at net.openhft.chronicle.bytes.MappedBytes.append8bit(MappedBytes.java:620)
	at net.openhft.chronicle.bytes.MappedBytes.append8bit(MappedBytes.java:46)
	at net.openhft.chronicle.bytes.ByteStringAppender.append8bit(ByteStringAppender.java:249)
	at net.openhft.chronicle.wire.BinaryWire.writeField0(BinaryWire.java:1170)
	at net.openhft.chronicle.wire.BinaryWire.writeField(BinaryWire.java:1154)
	at net.openhft.chronicle.wire.BinaryWire.write(BinaryWire.java:1113)
	at net.openhft.chronicle.wire.WireMarshaller$FieldAccess.write(WireMarshaller.java:509)
	at net.openhft.chronicle.wire.WireMarshaller.writeMarshallable(WireMarshaller.java:197)
	at net.openhft.chronicle.wire.Wires.writeMarshallable(Wires.java:329)
	at net.openhft.chronicle.wire.BinaryWire$FixedBinaryValueOut.marshallable(BinaryWire.java:1879)
	at net.openhft.chronicle.wire.ValueOut.typedMarshallable(ValueOut.java:451)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:671)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:516)
	at net.openhft.chronicle.wire.WireMarshaller$ObjectFieldAccess.getValue(WireMarshaller.java:663)
	at net.openhft.chronicle.wire.WireMarshaller$FieldAccess.write(WireMarshaller.java:516)
	at net.openhft.chronicle.wire.WireMarshaller.writeMarshallable(WireMarshaller.java:197)
	at net.openhft.chronicle.wire.Wires.writeMarshallable(Wires.java:329)
	at net.openhft.chronicle.wire.ValueOut.lambda$object$20(ValueOut.java:689)
	at net.openhft.chronicle.wire.BinaryWire$FixedBinaryValueOut.marshallable(BinaryWire.java:1840)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:689)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:516)
	at net.openhft.chronicle.wire.WireMarshaller$ObjectFieldAccess.getValue(WireMarshaller.java:663)
	at net.openhft.chronicle.wire.WireMarshaller$FieldAccess.write(WireMarshaller.java:516)
	at net.openhft.chronicle.wire.WireMarshaller.writeMarshallable(WireMarshaller.java:197)
	at net.openhft.chronicle.wire.Wires.writeMarshallable(Wires.java:329)
	at net.openhft.chronicle.wire.BinaryWire$FixedBinaryValueOut.marshallable(BinaryWire.java:1879)
	at net.openhft.chronicle.wire.ValueOut.typedMarshallable(ValueOut.java:451)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:671)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:516)
	at net.openhft.chronicle.wire.WireMarshaller$ObjectFieldAccess.getValue(WireMarshaller.java:663)
	at net.openhft.chronicle.wire.WireMarshaller$FieldAccess.write(WireMarshaller.java:516)
	at net.openhft.chronicle.wire.WireMarshaller.writeMarshallable(WireMarshaller.java:197)
	at net.openhft.chronicle.wire.Wires.writeMarshallable(Wires.java:329)
	at net.openhft.chronicle.wire.ValueOut.lambda$object$20(ValueOut.java:689)
	at net.openhft.chronicle.wire.BinaryWire$FixedBinaryValueOut.marshallable(BinaryWire.java:1840)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:689)
	at net.openhft.chronicle.wire.ValueOut.object(ValueOut.java:516)

These are my full Gradle dependencies:

dependencies {
	implementation group: 'net.openhft', name: 'chronicle-logger', version: '4.21ea1', ext: 'pom'
	implementation group: 'net.openhft', name: 'chronicle-logger-slf4j', version: '4.21ea1'
	implementation group: 'net.openhft', name: 'chronicle-logger-core', version: '4.21ea1'
	implementation 'net.openhft:chronicle-logger-tools:4.20ea2'
	
	implementation group: 'commons-io', name: 'commons-io', version: '2.8.0'
	implementation group: 'mysql', name: 'mysql-connector-java', version: '8.0.22'
	implementation group: 'org.apache.commons', name: 'commons-lang3', version: '3.11'
	implementation group: 'com.zaxxer', name: 'HikariCP', version: '3.4.5'
	implementation group: 'org.slf4j', name: 'slf4j-api', version: '1.7.30'
	implementation group: 'io.netty', name: 'netty-all', version: '4.1.53.Final'
	implementation group: 'org.jctools', name: 'jctools-core', version: '3.1.0'
	implementation group: 'it.unimi.dsi', name: 'fastutil', version: '8.4.3'
}

And my chronicle-logger.properties:

# shared properties
chronicle.base=data/chronicle-logs/
# logger : default
chronicle.logger.root.path=data/chronicle-logs/
chronicle.logger.root.level=debug
# optional tweaks
chronicle.logger.root.cfg.rollCycle=SMALL_DAILY

Chronicle logger tools not reading logs

I've switched from Simple Logger to chronicle-logger-slf4j (an implementation of the SLF4J API) and it seems to work: log files are created and they are not empty. But now I'm trying to read a file using the ChroniCat tool shipped in chronicle-logger-tools.

I tried the standard approach described in the docs, that is:

mvn exec:java -Dexec.mainClass="net.openhft.chronicle.logger.tools.ChroniCat" -Dexec.args="…​"

but that requires a pom.xml file. Even when I create one, it still fails with an Unauthorized (401) error:

Failed to execute goal on project chronicle-execution-engine.logger: Could not resolve dependencies for project chronicle-execution-engine:chronicle-execution-engine.logger:jar:1: Failed to collect dependencies at net.openhft:chronicle-logger-core:jar:4.20.1 -> net.openhft:chronicle-wire:jar:2.20.4 -> net.openhft:chronicle-core:jar:2.20.6-SNAPSHOT: Failed to read artifact descriptor for net.openhft:chronicle-core:jar:2.20.6-SNAPSHOT: Could not transfer artifact net.openhft:chronicle-core:pom:2.20.6-SNAPSHOT from/to chronicle-enterprise-snapshots (https://nexus.chronicle.software/content/repositories/snapshots): status code: 401, reason phrase: Unauthorized (401)

Another way I tried is running ChroniCat from Java directly. My code is:

public static void main(String[] args) throws FileNotFoundException {
        ChroniCat.main(new String[]{"./slf4j.chronicle.base/main"});
}

In this case I've got an error

java.lang.IllegalArgumentException: primitive: void
at net.openhft.chronicle.core.util.ObjectUtils.supplierForClass(ObjectUtils.java:96)
at net.openhft.chronicle.core.ClassLocal.computeValue(ClassLocal.java:54)
at java.base/java.lang.ClassValue.getFromHashMap(ClassValue.java:228)
at java.base/java.lang.ClassValue.getFromBackup(ClassValue.java:210)
at java.base/java.lang.ClassValue.get(ClassValue.java:116)
at net.openhft.chronicle.core.util.ObjectUtils.newInstance(ObjectUtils.java:409)
at net.openhft.chronicle.wire.WireInternal.throwable(WireInternal.java:246)
at net.openhft.chronicle.wire.ValueIn.throwable(ValueIn.java:494)
at net.openhft.chronicle.logger.tools.ChronicleLogReader.processLogs(ChronicleLogReader.java:122)
at net.openhft.chronicle.logger.tools.ChroniCat.main(ChroniCat.java:42)
at com.myorg.DumpLog.main(DumpLog.java:9)

Can someone share a way to dump the logs to STDOUT?

Unable to upgrade to slf4j 2.x

I am able to use slf4j 1.7.22 to generate logs using chronicle logger. When I upgrade slf4j to 2.0.3 I get the following error:

SLF4J: No SLF4J providers were found.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details.
SLF4J: Class path contains SLF4J bindings targeting slf4j-api versions 1.7.x or earlier.
SLF4J: Ignoring binding found at [jar:file:/Users/user/.gradle/caches/modules-2/files-2.1/net.openhft/chronicle-logger-slf4j/4.22ea2/c427ed8d94520a0230231905bd1b37cea411b1ce/chronicle-logger-slf4j-4.22ea2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See https://www.slf4j.org/codes.html#ignoredBindings for an explanation.

It seems that the provider loading mechanism for slf4j has changed in the 2.x release. Will chronicle logger support this change? The chronicle logger documentation states that it supports slf4j 1.7 and above, but I assume that was written prior to the 2.x changes.

Performance

Hi,

I created a project to measure the performance of Java loggers. The project uses JMH and measures the performance on different machines and configurations. You can find the results here: https://github.com/OpenElements/java-logger-benchmark

For some setups (a Linux box with 40 vCPUs, 250 GB RAM, a local SSD, and 240 threads logging) the performance of Chronicle-Logger is really bad. You can find the results in the README of the project. Could you have a look at the repo and check whether I did anything wrong? I do not want those numbers to be shared widely before you have had a chance to check whether they are correct.

fixed test net.openhft.chronicle.logger.jul.JulLoggerChronicleTest

see https://teamcity.chronicle.software/viewLog.html?buildId=642270&tab=buildResultsDiv&buildTypeId=OpenHFT_BuildAll_BuildJava11compileJava11

(screenshot of the TeamCity test failure)

java.lang.AssertionError: expected:<logger> but was:<null>
	at net.openhft.chronicle.logger.jul.JulLoggerChronicleTest.testChronicleConfiguration(JulLoggerChronicleTest.java:58)
	at net.openhft.chronicle.logger.jul.JulLoggerChronicleTest.testChronicleConfig(JulLoggerChronicleTest.java:91)
------- Stderr: -------
Unable to initialize chronicle-logger-jul (logger)
  Unable to claim exclusive exclusive lock on file /tmp/chronicle-jul-api/root-binary/metadata.cq4t

Compiled Jar

Hi, I was wondering if you have a compiled JAR, rather than just a pom.xml, that I can use for my project. Many thanks!
