serilog-sinks-splunk's People

Contributors

albertromkes, avireddy02, blackbaud-jeffdye, brettjaner, codernumber1, danielwertheim, dependabot[bot], diegofrata, eeparker, engrajabi, fred2u, havagan, jpfifer, juanjofuchs, maximrouiller, merbla, merbs-splunk, nblumhardt, patriklindstrom, pedroreys, pixel-m


serilog-sinks-splunk's Issues

What is the default sourceType?

I am a little new to Splunk. I have asked our admin to generate a HEC token for use in my app, and he asked what source type the HEC token needed to be configured for. Does this sink have a default that I can give my admin? Will "JSON" work for now?

This is a default ASPNET Core 3.1 app.
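For what it's worth, the source type can also be set explicitly when configuring the sink, so the token doesn't have to rely on a default. A minimal sketch (the endpoint, token, and "_json" value below are placeholders, not recommendations):

using Serilog;

// Sketch: set the sourcetype explicitly via the sink's sourceType parameter.
Log.Logger = new LoggerConfiguration()
    .WriteTo.EventCollector(
        "https://splunk.example.com:8088/services/collector", // placeholder HEC endpoint
        "<HEC token>",                                         // placeholder token
        sourceType: "_json")                                   // placeholder sourcetype
    .CreateLogger();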

Thanks

no httprequests for dotnet core app in docker container

I have a .NET Core RC2 application that uses serilog-sinks-splunk v2.0.0 to send all log entries to a Splunk 6.3 Enterprise server. When the application runs locally (not in a Docker container), HTTP requests are sent successfully: I can see them in Fiddler and my entries are viewable in Splunk. However, when the same application runs in a Docker container, I see no requests in Fiddler and nothing in Splunk. Has anyone had this problem?
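For diagnosing cases like this, Serilog's SelfLog can surface errors from the sink itself. A minimal sketch, assuming stderr from the container is visible (e.g. via docker logs):

// Route Serilog's internal diagnostics to stderr so sink failures show up
// in the container output instead of being swallowed silently.
Serilog.Debugging.SelfLog.Enable(Console.Error);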

PackageReference for Serilog & System.Net.Http Only Applied to netstandard1.3

In the Serilog.Sinks.Splunk.csproj, the PackageReference for Serilog & System.Net.Http is only applied to netstandard1.3

<ItemGroup Condition=" '$(TargetFramework)' == 'netstandard1.3' ">
    <PackageReference Include="Serilog" Version="2.6.0" />
    <PackageReference Include="System.Net.Http" Version="4.3.3" />
</ItemGroup>

<ItemGroup>
    <PackageReference Include="Serilog.Sinks.PeriodicBatching" Version="2.1.1" />
</ItemGroup>

Then in my .NET 4.5 application, which consumes the Serilog.Sinks.Splunk package via its netstandard1.1 target, it pulls in Serilog v2.0.0 because that is the version Serilog.Sinks.PeriodicBatching references, and System.Net.Http is left out. Everything still compiles, so maybe the System.Net.Http package reference isn't required? Or maybe it's just because I'm on .NET Framework and there is a framework assembly for System.Net.Http?

Also, there was a pull request a while back (#77) to remove the System.Net.Http package reference for .NET Framework applications. If those applications update Serilog.Sinks.Splunk to v3.0.0, won't they now be broken?

Unable to add an outputTemplate

Hi,
The option to add an outputTemplate is not available in version 3.4.0. Is there any way to get this option, or is it no longer supported?

Any work/thoughts on Core CLR support?

Has anything been done for this sink to determine the level of effort or the tasks needed to support Core CLR and Serilog 2.0.0? Even a list of general things that might need to change when upgrading a sink would help.

I'm not in any immediate need, but it would be kind of fun to see it running. If I get some time in the next few months I might take a crack at it, but I don't want to duplicate efforts if anyone has done any background work.

Colin

HttpClient.SendAsync not working in EventCollectorSink.cs

Hi,
I have used this sink to push application logs to Splunk, but I ended up with an error.
Looking through the repo, I found that EventCollectorSink.cs builds an EventCollectorRequest, where the actual HttpClient properties are configured, and then calls _httpClient.SendAsync(request).ConfigureAwait(false). That call throws an error every time, even though I passed all the parameters such as the Splunk URI and token.
The exception is always "Error occurred while sending request".
Could you please let me know why it is behaving like this? Any help is appreciated.
Thanks.

Fallback if network connection is down

Hi,

I wonder if there are any plans to add a fallback for when the network connection is down? If the log messages cannot be sent across the network, I would like to persist the events to file using CompactJsonFormatter and have an agent that probes for any files and sends them to the server when the network connection is restored.

Great work!
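In the meantime, a rough approximation is to write the same events to a local file in compact JSON alongside the HEC sink and let an external agent replay them later. A sketch only, assuming the Serilog.Sinks.File and Serilog.Formatting.Compact packages; it is not a connectivity-aware fallback, and the endpoint, token, and path are placeholders:

using Serilog;
using Serilog.Formatting.Compact;

// Sketch: duplicate events to a local compact-JSON file so an external agent
// can forward them when connectivity returns.
Log.Logger = new LoggerConfiguration()
    .WriteTo.EventCollector(
        "https://splunk.example.com:8088/services/collector", // placeholder HEC endpoint
        "<HEC token>")                                         // placeholder token
    .WriteTo.File(new CompactJsonFormatter(), "logs/splunk-buffer.clef")
    .CreateLogger();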

HttpEventlogCollector and sourceType

When posting an entry with the 'sourceType' parameter set, I get a bad-request error back from Splunk (version 6.4.1). If I change 'sourceType' to 'sourcetype' (all lowercase), it starts working.

I'm not sure if this is a setting in Splunk itself or a bug in this sink. If it's a bug, let me know and I'll create a pull request with a fix.

AuditTo sink

When using AuditTo, events should be written immediately and not batched. Since the Splunk sink is based on PeriodicBatchingSink, events are batched according to the parameters that define batch count and interval. For now I've downloaded the code for the sink implementation and call it directly from my own abstraction, so this isn't a blocker.

This should perhaps live in serilog-sinks-periodicbatching, since the behaviour expected by AuditTo should be propagated down to the sink implementation.

Behind the scenes, something like a SinkEmitBehavior could be introduced and passed as a strategy to the concerned parts, to allow the desired AuditTo contract to be honoured.

Edit: I looked at how this was done for the MSSQL sink; a dedicated sink is created for auditing there. But the auditing behaviour is spread across other parts as well, for example PropertyValueConverter (#800), so refactoring those parts makes sense.

Timestamp not picked up by Splunk Cloud

I'm using Splunk Cloud with the httpevent input and the standard SplunkJsonFormatter, where the timestamp from Serilog is output as a top-level property called time. However, it seems that Splunk Cloud isn't picking this up and adds its own time (when the data is received). I'm not sure what the correct solution is; my peers are saying that the timestamp has to be added inside the payload (like Level is), but I'm not sure.
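For reference, the HTTP Event Collector envelope generally carries the timestamp as a top-level time field in epoch seconds, with the log data nested under event. A sketch of that shape (values are placeholders, and the exact output of SplunkJsonFormatter may differ):

{
    "time": 1469834793.524,
    "host": "my-host",
    "event": {
        "Level": "Information",
        "RenderedMessage": "Running no template loop 10"
    }
}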

Compress http content

Hi,

I had a custom implementation to send logs to Splunk, and I was sending lots of logs from AWS EC2 instances through a NAT gateway, and not all the EC2 instances were in the same region as the NAT gateway. What this means is that I was being charged a significant amount just in data-transfer costs.

We ended up changing the networking topology of our solution, but as an initial step, what helped reduce the costs significantly was compressing the content before sending it to Splunk.

So our code to send to Splunk with compression looked like:

// Namespaces needed by this snippet: System.IO, System.IO.Compression, System.Net.Http,
// System.Net.Http.Headers, System.Text, System.Threading, System.Threading.Tasks
private async Task<HttpResponseMessage> Send(string splunkPayload)
{
    using (var request = new HttpRequestMessage(HttpMethod.Post, _url))
    {
        request.Headers.Authorization = new AuthenticationHeaderValue("Splunk", _hostConfiguration.Token);
        using (var compressedStream = await CompressWithGzipAsync(splunkPayload))
        {
            request.Content = new StreamContent(compressedStream);
            request.Content.Headers.Add("Content-Type", "application/json; charset=utf-8");
            request.Content.Headers.Add("Content-Encoding", "gzip");
            var response = await _httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, CancellationToken.None);
            return response;
        }
    }

    async Task<MemoryStream> CompressWithGzipAsync(string plaintext)
    {
        var output = new MemoryStream();
        using (var input = new MemoryStream(Encoding.UTF8.GetBytes(plaintext)))
        {
            using (GZipStream compressor = new GZipStream(output, CompressionLevel.Optimal, leaveOpen: true)) //disposing GZipStream guarantees a flush is made and all data is copied to the output stream
            {
                await input.CopyToAsync(compressor);
            }
        }
        output.Seek(0, SeekOrigin.Begin); // after a flush has been guaranteed by the dispose (could be explicit flush though) make sure to position the stream in the beginning
        return output;
    }
}

The important part is the CompressWithGzipAsync method, where the Splunk payload (which for my custom implementation, and I believe for serilog-sinks-splunk as well, can be one or many batched messages) is compressed, as opposed to just doing https://github.com/serilog/serilog-sinks-splunk/blob/0bc82f5492154b24b5054abe5809df1b525662ec/src/Serilog.Sinks.Splunk/Sinks/Splunk/EventCollectorRequest.cs#L33

Is this an enhancement that you believe is worthwhile adding?

I can't speak to compatibility with all Splunk versions, in terms of whether they all accept compressed content or whether something needs to be enabled in Splunk to accept it. However, I believe this is a worthwhile enhancement that could be toggled on or off via an extra parameter added to the LoggerSinkConfiguration.EventCollector methods.
I mean adding a "bool compressContent" to the SplunkLoggingConfigurationExtensions methods such as: https://github.com/serilog/serilog-sinks-splunk/blob/0bc82f5492154b24b5054abe5809df1b525662ec/src/Serilog.Sinks.Splunk/SplunkLoggingConfigurationExtensions.cs#L55-L71

No request is made, no error thrown

Steps

dotnet new console -n serilog-splunk-issue
cd serilog-splunk-issue
dotnet add package serilog
dotnet add package serilog.sinks.splunk

Program.cs:

using System;
using Serilog;

namespace serilog_splunk_issue
{
    class Program
    {
        static void Main(string[] args)
        {
            Serilog.Debugging.SelfLog.Enable(Console.Error);
            
            var log = new LoggerConfiguration()
                .WriteTo.EventCollector("http://192.168.0.11:8088/services/collector", "ea6595dc-72f2-40fd-94c5-c93300dc0158")
                .CreateLogger();

            log.Information("test");
        }
    }
}
dotnet run

Result

No console output.

No records were added.

Fiddler doesn't show any request to 192.168.0.11
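One thing worth ruling out when reproducing: the HEC sink batches events, so a console app that exits immediately can terminate before anything is posted unless the logger is flushed. A sketch of the usual pattern (not necessarily the root cause here):

log.Information("test");

// CreateLogger() returns Serilog.Core.Logger, which is IDisposable;
// disposing it flushes any batched sinks before the process exits.
log.Dispose();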

Fix CI on Travis

Currently targeting latest and Preview2

https://travis-ci.org/serilog/serilog-sinks-splunk

This should only target required versions.

Issue with OSX and OpenSSL

==> Downloading https://homebrew.bintray.com/bottles/openssl-1.0.2h_1.yosemite.b
==> Pouring openssl-1.0.2h_1.yosemite.bottle.tar.gz
==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /usr/local/etc/openssl/certs
and run
  /usr/local/opt/openssl/bin/c_rehash
This formula is keg-only, which means it was not symlinked into /usr/local.
Apple has deprecated use of OpenSSL in favor of its own TLS and crypto libraries
Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:
    LDFLAGS:  -L/usr/local/opt/openssl/lib
    CPPFLAGS: -I/usr/local/opt/openssl/include
==> Summary
๐Ÿบ  /usr/local/Cellar/openssl/1.0.2h_1: 1,691 files, 12.0M
Warning: Refusing to link: openssl
Linking keg-only openssl means you may end up linking against the insecure,
deprecated system OpenSSL while using the headers from Homebrew's openssl.
Instead, pass the full include/library paths to your compiler e.g.:
  -I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib

Allow setting a proxy

I'm currently working on a project for a client where all outbound HTTP calls need to go through a proxy server. I see that an HttpClient is used internally in the sink, but there's no injection point for an HttpMessageHandler anywhere. I suggest another optional parameter in the EventCollector constructor and extension method: proxyAddress, which is exactly what it sounds like. This can then be passed to the EventCollectorClient constructor and used to create an HttpClientHandler with the provided proxy.

I'd be happy to create a PR for this.
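Concretely, the sketch below shows one way the sink could build its handler from such a proxyAddress parameter. The parameter and helper method are hypothetical; only HttpClientHandler and WebProxy are existing types:

using System.Net;
using System.Net.Http;

// Sketch: what the sink could do internally when the proposed proxyAddress is supplied.
static HttpMessageHandler CreateMessageHandler(string proxyAddress) =>
    string.IsNullOrEmpty(proxyAddress)
        ? new HttpClientHandler()
        : new HttpClientHandler
          {
              Proxy = new WebProxy(proxyAddress),
              UseProxy = true
          };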

Timestamp property is superfluous

The timestamp property is superfluous because of the way Splunk processes _time (epoch).

It should be removed. Example event:

{
    Level:  Information
    Properties: { }
    RenderedMessage:  Running no template loop 10
    Timestamp:  2016-07-30T09:26:33.5245230+10:00
}

AWS Serverless Lambda - synchronous logs

Hello

I was using this sink to log to Splunk from my C# serverless Lambda. We found that the platform kills all background threads after the API response is sent from the controller, which was causing some logs to go missing. It would be great if you could add an additional parameter that makes all calls to Splunk synchronous, in place of the periodic batching.

Thanks

Update to use VS2017/.Net Core tooling

I get errors like: src\sample\project.json: Failed to migrate XProj project Sample. Could not find project.json at L:\Dev\serilog-sinks-splunk\src\sample\project.json.
and
serilog-sinks-splunk.sln: Visual Studio needs to make non-functional changes to this project in order to enable the project to open in released versions of Visual Studio newer than Visual Studio 2010 SP1 without impacting project behavior.

Split sinks into separate packages/repositories

Currently the Splunk package caters for three sinks:

  • HEC (Supports netstandard1.1 & netstandard1.3)
  • TCP (Supports net45 & depends on Splunk.Logging.Common which depends on Newtonsoft.Json >= 6.0.8)
  • UDP (Supports net45 & depends on Splunk.Logging.Common which depends on Newtonsoft.Json >= 6.0.8)

This however introduces issues such as limitations relating to signing DLLs (see: #76) and .NET Core support across all sinks (see: #65). In addition, there is no clear indication of how much each sink is used.

To allow the related sinks to evolve independently, this is a proposal to split the sinks into isolated packages.

Notes/Considerations:

  • Currently there is not a significant amount of re-used logic/code across sinks
  • The majority of features/issues relate to the HEC sink
  • Release of future sinks (nova etc.)

Option 1

Status Quo, all three sinks packaged together and we deal with the issues mentioned

Option 2

Split sinks and packages into separate repos/packages

  • Serilog.Sinks.Splunk (HEC Sink): appears to be the most used sink and the ingestion method preferred by Splunk itself.
  • Serilog.Sinks.Splunk.Tcp (TCP Sink)
  • Serilog.Sinks.Splunk.Udp (UDP Sink)

Option 3

Introduce meta-packages

  • Serilog.Sinks.Splunk (Core functionality - not much)
  • Serilog.Sinks.Splunk.HEC (HEC Sink)
  • Serilog.Sinks.Splunk.Tcp (TCP Sink)
  • Serilog.Sinks.Splunk.Udp (UDP Sink)

Thoughts? Other ideas?

Allow Splunk Sink to reuse HttpClient

The sink currently allows reuse of the HttpMessageHandler
(https://github.com/serilog/serilog-sinks-splunk/blob/dev/src/Serilog.Sinks.Splunk/Sinks/Splunk/EventCollectorSink.cs#L92)

Would it be possible to also allow reuse of the full HttpClient? The EventCollectorClient is a thin wrapper over HttpClient, and either the sink or the user would be on the hook to handle the authentication header.

The request stems from the usage of the Sink within the execution context of an Azure Function, where best practice includes reuse of the HttpClient.

Currently, I'm seeing occasional SocketExceptions when using the Splunk sink within an Azure Function, which I believe stem from the sink constructing a new HttpClient on every function invocation.

System.Threading.Tasks.TaskCanceledException

Hi,
I am working with Serilog.Sinks.Splunk's Event Collector and I am getting a TaskCanceledException. I think I need to increase the timeout for the HTTP request. Is there any way I can do that?

Async logging

Does this sink support async logging of events to Splunk?

.Net core 3.1 support

Hi,

I am having trouble when using .NET Core 3.1. There are no error messages or exceptions anywhere that I can see, but the logs do not appear in my Splunk index. The same setup works fine on .NET Core 2.1. Any ideas?

Regards
Mathias

Method Not Found ...EventCollector

I'm getting this logging error on a couple of servers, but not on the majority (> 50). It affects a Windows Server 2016 Standard machine with .NET 4.6.2 and a Windows Server 2012 R2 Standard machine with .NET 4.6.1. I am not aware of any differences specific to these servers.

I have updated all the NuGet packages and I'm not sure what else to look at.

System.MissingMethodException: Method not found: 'Serilog.LoggerConfiguration Serilog.SplunkLoggingConfigurationExtensions.EventCollector(Serilog.Configuration.LoggerSinkConfiguration, System.String, System.String, System.String, System.String, System.String, System.String, System.String, Serilog.Events.LogEventLevel, System.IFormatProvider, Boolean, Int32, Int32, System.Nullable`1, System.Net.Http.HttpMessageHandler, Serilog.Core.LoggingLevelSwitch)'.

Add AppSettings Configuration ability

The logic that allows for easy app-settings configuration doesn't work with complex types. It's easy enough to add this extension to my own assembly, but I wonder if it should be provided by default:

    public static LoggerConfiguration Splunk(this LoggerSinkConfiguration sinkConfiguration, string host, int port)
    {
        return sinkConfiguration.SplunkViaTcp(new Serilog.Sinks.Splunk.SplunkTcpSinkConnectionInfo(host, port));
    }

Or maybe I'm doing something wrong, ha!

System.Net.Http NuGet Reference in .NET Framework 4.7.1

Hey y'all, using this sink in a full .NET Framework app (even 4.7.1) seems to pull in the System.Net.Http NuGet package (v4.3.2), which can cause all kinds of reference errors and assembly-binding issues, along with dependencies on a few System.Security.* packages too. In this case, I think we'd rather use the actual Framework System.Net.Http.

This is similar to this issue in another sink: datalust/serilog-sinks-seq#83
Changeset which resolved that issue: https://github.com/serilog/serilog-sinks-seq/pull/85/files

Would it be possible to remove that package for Framework specifically for 4.7+?

Additional Samples

I have this Splunk setting in log4net and am not sure how to do the equivalent in Serilog, specifically the metadata in facility and identity shown in the configuration below:

<log4net>
  <root>
    <level value="DEBUG" />
    <appender-ref ref="syslogger" />
  </root>
  <appender name="syslogger" type="log4net.Appender.RemoteSyslogAppender">
    <RemoteAddress value="logs.chugachelectric.com" />
    <RemotePort value="514" />
    <facility value="daemons" />
    <identity value="%P{log4net:HostName} %logger" />
    <layout type="log4net.Layout.PatternLayout" value="[%level] %m%n" />
  </appender>
</log4net>
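A rough Serilog equivalent, assuming the facility/identity metadata can be carried as enriched properties rather than syslog fields. A sketch only: WithMachineName comes from the Serilog.Enrichers.Environment package, and the HEC endpoint and token are placeholders:

using Serilog;

// Sketch: approximate the log4net facility/identity metadata with Serilog enrichment.
Log.Logger = new LoggerConfiguration()
    .Enrich.WithMachineName()                    // roughly %P{log4net:HostName}
    .Enrich.WithProperty("Facility", "daemons")  // carried as an event property
    .WriteTo.EventCollector(
        "https://splunk.example.com:8088/services/collector", // placeholder HEC endpoint
        "<HEC token>")                                         // placeholder token
    .CreateLogger();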

upgrade to 3.5.0 in aspnet core web api project causes exception on configuration

Hi,
I clicked the simple 'package manager' update inside Visual Studio to upgrade from 3.4.0 to 3.5.0, and now the configuration throws the following error:

System.ArgumentOutOfRangeException
  HResult=0x80131502
  Message=queue limit must be positive (Parameter 'queueLimit')
  Source=Serilog.Sinks.PeriodicBatching
  StackTrace:
   at Serilog.Sinks.PeriodicBatching.BoundedConcurrentQueue`1..ctor(Int32 queueLimit)
   at Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink..ctor(Int32 batchSizeLimit, TimeSpan period, Int32 queueLimit)
   at Serilog.Sinks.Splunk.EventCollectorSink..ctor(String splunkHost, String eventCollectorToken, String uriPath, Int32 batchIntervalInSeconds, Int32 batchSizeLimit, Nullable`1 queueLimit, ITextFormatter jsonFormatter, HttpMessageHandler messageHandler)
   at Serilog.Sinks.Splunk.EventCollectorSink..ctor(String splunkHost, String eventCollectorToken, String uriPath, String source, String sourceType, String host, String index, Int32 batchIntervalInSeconds, Int32 batchSizeLimit, Nullable`1 queueLimit, IFormatProvider formatProvider, Boolean renderTemplate, HttpMessageHandler messageHandler)
   at Serilog.SplunkLoggingConfigurationExtensions.EventCollector(LoggerSinkConfiguration configuration, String splunkHost, String eventCollectorToken, String uriPath, String source, String sourceType, String host, String index, LogEventLevel restrictedToMinimumLevel, IFormatProvider formatProvider, Boolean renderTemplate, Int32 batchIntervalInSeconds, Int32 batchSizeLimit, Nullable`1 queueLimit, HttpMessageHandler messageHandler, LoggingLevelSwitch levelSwitch)

It runs fine again if I roll back to 3.4.0. Is there a specific step needed to make the upgrade work?

My code (a pretty stock Startup.cs, inside Startup.Startup(IConfiguration configuration)):

Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .WriteTo.EventCollector(
        Configuration.GetValue<string>("Splunk:splunkHost"),
        Configuration.GetValue<string>("Splunk:eventCollectorToken"),
        sourceType: "docgen")
    .CreateLogger();

Thanks

Merge System.Net.Http reference changes from dev branch to master?

Hey, I've done some testing with the prerelease version of this package after having picked up the System.Net.Http reference changes in the dev branch and verified that they allow us to remove the reference to the System.Net.Http nuget package as we wanted, which is great news!

However, I was wondering what needs to happen to bring those changes over to the master branch. My team is hesitant to ship changes using prerelease versions of NuGet packages, and I saw there were a few other commits on dev that master did not have. Thanks!

Possible thread leak when ILogger instances are disposed.

I have a web application which spins up and disposes of loggers as needed while it's running. After adding the Splunk sink I noticed the application running slower with much higher CPU utilization; the application's thread count just grew constantly. Looking at the EventCollectorSink constructor, I notice it calls RepeatAction.OnInterval(TimeSpan pollInterval, Action action, CancellationToken token) but doesn't cancel the returned task on dispose. My guess is that each time an ILogger is spun up, a new task/thread is created, and it continues to run after the ILogger is disposed, until the web app recycles. Could the returned task be held in a private field by the sink and cancelled on dispose, or could the sink inherit from PeriodicBatchingSink and let that deal with batching log messages?

https://github.com/serilog/serilog-sinks-splunk/blob/092d929ea92624f98970844e46e91e7878db54b4/src/Serilog.Sinks.Splunk/Sinks/Splunk/EventCollectorSink.cs#L193

https://github.com/serilog/serilog-sinks-splunk/blob/092d929ea92624f98970844e46e91e7878db54b4/src/Serilog.Sinks.Splunk/Sinks/Splunk/RepeatAction.cs#L26

Add strongname to assembly

Can we add a strong-name signing key to this assembly? I can't include it in my project, which is strong-named.

Using Serilog.Sinks.Splunk when Network Connectivity Might Be Unavailable or Metered at Times?

As Serilog.Sinks.Splunk has a dependency on Serilog.Sinks.PeriodicBatching, a related issue (#49) has also been submitted under Serilog.Sinks.PeriodicBatching.

What might be a "best practice" approach for using Serilog.Sinks.Splunk when the device that is running the software from which the logs are collected might be offline and/or on a metered connection for substantial portions of time? Ideally, logs would only be forwarded to Splunk when on an unmetered connection.

In looking at the code, Serilog.Sinks.Splunk.EventCollectorSink inherits from Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink.

In turn, Serilog.Sinks.PeriodicBatching.PeriodicBatchingSink instantiates a captive instance of the non-public class Serilog.Sinks.PeriodicBatching.BatchedConnectionStatus (i.e. no opportunity to inject an alternative implementation).

Several values within BatchedConnectionStatus appear to be hard-coded, including the following:

  • MinimumBackoffPeriod
  • MaximumBackoffInterval
  • FailuresBeforeDroppingBatch
  • FailuresBeforeDroppingQueue

I understand the reasons and use cases for backoff logic, but in scenarios where part of the time is spent offline or on metered connectivity, backoff that also drops batches and queues is not desirable. (Also, logs aren't the only data being collected, and the backend systems are designed for this type of usage.)

The following might be a way to enable such a capability with minimal disturbance to the existing code base, but might others more familiar with the code have other suggestions?

  • Splunk.EventCollectorSink appears to be using an obsolete constructor of its parent PeriodicBatchingSink; update Splunk.EventCollectorSink to accept and pass through to PeriodicBatchingSink an instance of PeriodicBatchingSinkOptions.
  • Add the following properties to PeriodicBatchingSinkOptions:
    • MinimumBackoffPeriod : also provide a way to disable?
    • MaximumBackoffInterval : also provide a way to disable?
    • FailuresBeforeDroppingBatch : also provide a way to disable?
    • FailuresBeforeDroppingQueue : also provide a way to disable?
    • ShouldAttempt: a lambda expression that evaluates to a Boolean and facilitates providing a custom set of criteria for determining whether or not to attempt to connect (e.g. if offline or on a metered connection, don't attempt); see the sketch after this list.
  • Update PeriodicBatchingSink.OnTick() to evaluate ShouldAttempt before executing the try/catch block that currently exists.
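The kind of gate the proposed ShouldAttempt option would evaluate might look like the following. This is a runnable sketch of the predicate only; the option itself does not exist today, and IsMeteredConnection would be an app-specific check:

using System;
using System.Net.NetworkInformation;

// Sketch: a connectivity gate of the sort the proposed ShouldAttempt option
// would consult before each OnTick() flush attempt.
Func<bool> shouldAttempt = () =>
    NetworkInterface.GetIsNetworkAvailable(); // could also check a hypothetical IsMeteredConnection()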

Support for file buffer?

Is there support for a file buffer if the HTTP service is down?
The Seq sink implements this.

`Method not found` while configuring sink with `WriteTo.EventCollector`

When working in the context of Azure Functions, either locally using the func CLI (func host start) or once deployed to Azure, I sometimes get the following exception during the call that creates the logger, apparently with certain combinations of packages, target frameworks, etc.:

Exception while executing function: Functions.SomeFunc
Microsoft.Azure.WebJobs.Host.FunctionInvocationException : Exception while executing function: Functions.SomeFunc ---> System.AggregateException : One or more errors occurred. ---> Method not found: 'Serilog.LoggerConfiguration Serilog.SplunkLoggingConfigurationExtensions.EventCollector(Serilog.Configuration.LoggerSinkConfiguration, System.String, System.String, System.String, System.String, System.String, System.String, System.String, Serilog.Events.LogEventLevel, System.String, System.IFormatProvider, Boolean, Int32, Int32, System.Net.Http.HttpMessageHandler)'.
   at Microsoft.Azure.WebJobs.Script.Description.DotNetFunctionInvoker.GetTaskResult(Task task) at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Description\DotNet\DotNetFunctionInvoker.cs : 455
   at System.Threading.Tasks.ContinuationResultTaskFromTask`1.InnerInvoke()
   at System.Threading.Tasks.Task.Execute()
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at async Microsoft.Azure.WebJobs.Script.Description.DotNetFunctionInvoker.InvokeCore(Object[] parameters,FunctionInvocationContext context) at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Description\DotNet\DotNetFunctionInvoker.cs : 276
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at async Microsoft.Azure.WebJobs.Script.Description.FunctionInvokerBase.Invoke(Object[] parameters) at C:\azure-webjobs-sdk-script\src\WebJobs.Script\Description\FunctionInvokerBase.cs : 95
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at async Microsoft.Azure.WebJobs.Host.Executors.VoidTaskMethodInvoker`2.InvokeAsync[TReflected,TReturnType](TReflected instance,Object[] arguments)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at async Microsoft.Azure.WebJobs.Host.Executors.FunctionInvoker`2.InvokeAsync[TReflected,TReturnValue](Object instance,Object[] arguments)
  …

The call producing the error is:

return new LoggerConfiguration()
    .Enrich.WithMachineName()
    .WriteTo.EventCollector(host, token)
    .WriteTo.RollingFile( ... )
    .CreateLogger();

The whole execution fails at this point, obviously. Commenting out the EventCollector line gets rid of the exception and the RollingFile continues to work fine.

It appears a few mentions of a similar problem can be found elsewhere.

It looks like in most if not all cases there's some sort of package versioning problem. For me it works when I use the following combination:

  • Serilog.Sinks.Splunk v2.3.0
  • Microsoft.Extensions.DependencyInjection v1.1.1

Upgrades are available for both packages listed above. Upgrading either or both causes the exception.

Support for Sink config from appsettings.json

Hello - thanks for the library.

I was wondering if you support reading the sink config from the configuration files using Serilog.Settings.Configuration? I can't seem to get the code below to add the sink. Am I doing it wrong, or is it not supported? Or could you point me to some docs?

appsettings.json

"WriteTo": [
      {
          "Name": "EventCollector",
          "Args": {
             "splunkHost": "https://myhost/services/collector",
             "eventCollectorToken": "0000"
          }
       }
    ],

program.cs

Log.Logger = new LoggerConfiguration()
                .ReadFrom.Configuration(configuration)
                .CreateLogger();
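One detail that sometimes matters with Serilog.Settings.Configuration is nesting the settings under a Serilog section and declaring the sink assembly in Using. A sketch (whether Using is strictly required depends on the package version and how sink assemblies are discovered):

"Serilog": {
  "Using": [ "Serilog.Sinks.Splunk" ],
  "WriteTo": [
    {
      "Name": "EventCollector",
      "Args": {
        "splunkHost": "https://myhost/services/collector",
        "eventCollectorToken": "0000"
      }
    }
  ]
}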
