segmentio / analytics-java
The hassle-free way to integrate analytics into any java application.
Home Page: https://segment.com/libraries/java
This is pulling in lots of random junk.
We recently found some bugs in our code that integrates with the Analytics Java library that would be impossible to catch in our unit tests with mocks, because we would have to know details of the actual Analytics library implementation.
What we could use is a "stub" implementation of your library that, rather than connecting to the Segment IO service, runs an embedded mock that simulates the service and rejects/fails invalid usage of the API, e.g. missing or incomplete data, malformed input, etc.
Thoughts?
It appears that once the .properties(map) call is made, the value is treated as read-only for the remainder of its lifetime. If the library is not adding to the properties anywhere, it would be much easier (especially with dynamic languages such as Groovy) to change the signature to Map<String, ?>, since the inferred type of most inline maps is either Map<String, Serializable> or Map<String, String>.
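The variance problem can be shown in a few lines. Note that propertiesStrict and propertiesFlexible below are illustrative stand-ins for the library's method, not its actual API:

```java
import java.util.HashMap;
import java.util.Map;

class PropertiesVariance {
    // Current-style signature: only exactly Map<String, Object> is accepted.
    static int propertiesStrict(Map<String, Object> props) { return props.size(); }

    // Proposed signature: any Map with String keys is accepted.
    static int propertiesFlexible(Map<String, ?> props) { return props.size(); }

    static int demo() {
        Map<String, String> inline = new HashMap<>();
        inline.put("plan", "premium");

        // propertiesStrict(inline);  // does not compile: Map<String, String> is not a Map<String, Object>
        return propertiesFlexible(inline); // compiles: the wildcard accepts any value type
    }
}
```

Since the method only reads the map, Map<String, ?> is safe; writing to such a map would not compile, which matches the read-only behavior described above.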
How can I achieve mixpanel.people.increment() with the Segment Java library?
Waiting for sonatype hosting .. https://issues.sonatype.org/browse/OSSRH-5178
Google App Engine has restrictions around spawning threads, so the current implementation of Flusher.java doesn't work correctly on App Engine. See the "Threads" section of this page for more details on how threads may be used on App Engine: https://cloud.google.com/appengine/docs/java/
Similar to Android, it would be awesome to update the Java API. The goal is to make it more flexible and customizable so that we can load environments into it; this way the Android SDK will simply be an extension of the Java SDK, and we can release more environments so users can get started more quickly on those.
We'll still maintain the Android SDK, which aims to have as few dependencies as possible. This will be a good alternative for people not using bundled integrations - it does much less work!
Analytics analytics = new Analytics.Builder(writeKey)
.channel(MOBILE|SERVER|BROWSER)
.client(client)
.converter(GsonConverter|JacksonConverter|JsonObjectConverter)
.queue(queue)
.logger(logger)
.options(options)
.context(context)
.listener(listener)
.build();
This is the same as defined in the spec, and will be an enum. The platform may be able to load this automatically.
enum Channel {
MOBILE, SERVER, BROWSER
}
This is an abstraction for an HTTP networking client. Most server environments let users choose one, so we should let users use that instead of forcing them onto our client. We'll provide implementations for at least HttpUrlConnection and OkHttp. For the Android SDK, we'll automatically load HttpUrlConnectionClient, or OkHttp if available.
interface Client {
/**
* Synchronously execute an HTTP request represented by {@code request} and encapsulate all response data
* into a {@link Response} instance.
*/
Response execute(Request request) throws IOException;
}
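As a sketch of what a default implementation might look like: Request and Response are not defined in this proposal, so the minimal shapes below are assumptions, and error handling is elided.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical value types; the proposal does not define Request/Response.
class Request {
    final String url;
    final byte[] body;
    Request(String url, byte[] body) { this.url = url; this.body = body; }
}

class Response {
    final int code;
    final byte[] body;
    Response(int code, byte[] body) { this.code = code; this.body = body; }
}

// Interface repeated from above for a self-contained example.
interface Client {
    Response execute(Request request) throws IOException;
}

// One possible default built on HttpURLConnection (error handling elided).
class HttpUrlConnectionClient implements Client {
    @Override public Response execute(Request request) throws IOException {
        HttpURLConnection connection = (HttpURLConnection) new URL(request.url).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(request.body); // send the batch payload
        }
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (InputStream in = connection.getInputStream()) {
            byte[] chunk = new byte[4096];
            int read;
            while ((read = in.read(chunk)) != -1) buffer.write(chunk, 0, read);
        }
        return new Response(connection.getResponseCode(), buffer.toByteArray());
    }
}
```

An OkHttp-backed implementation would look the same from the caller's side, which is the point of the abstraction.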
This is an abstraction for working with JSON data. Java doesn't have one built in, so most users will be using a third-party library - likely one of Gson, Jackson, or org.json. We'll provide implementations for each. The Android SDK will use the org.json converter by default.
/**
* Convert a byte stream to and from a concrete type.
*
* @param <T> Object type.
*/
public interface Converter<T> {
/** Converts bytes to an object. */
T from(byte[] bytes) throws IOException;
/** Converts o to bytes written to the specified stream. */
void toStream(T o, OutputStream bytes) throws IOException;
}
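A minimal converter for raw strings shows the shape of an implementation; the real defaults would wrap Gson, Jackson, or org.json rather than pass bytes through.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Interface repeated from above for a self-contained example.
interface Converter<T> {
    T from(byte[] bytes) throws IOException;
    void toStream(T o, OutputStream bytes) throws IOException;
}

// Trivial converter for raw strings; a GsonConverter would deserialize
// the bytes into a typed object here instead.
class StringConverter implements Converter<String> {
    @Override public String from(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
    @Override public void toStream(String o, OutputStream bytes) throws IOException {
        bytes.write(o.getBytes(StandardCharsets.UTF_8));
    }
}
```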
Abstraction for queueing payloads. Clients can supply their own implementations, which may be a disk-backed queue or an in-memory queue. This will conform to the java.util.Queue interface. Two default implementations will be provided - a memory queue, and a disk queue backed by Tape. Android will use the disk queue by default.
interface Queue<E> extends java.util.Queue<E> {
}
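The in-memory default can be a thin delegate over a JDK queue; MemoryQueue is an illustrative name, not a committed class.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Marker interface from the proposal (note: interfaces extend, not implement).
interface Queue<E> extends java.util.Queue<E> { }

// In-memory default implementation; a Tape-backed disk queue would implement
// the same interface and be swapped in on Android.
class MemoryQueue<E> extends ConcurrentLinkedQueue<E> implements Queue<E> { }
```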
Abstraction for logging messages. Environments can load this automatically. We'll provide implementations for slf4j, java.util.logging.Logger and the Android Log (I might release a separate module for Ln and Timber as well).
Android will load the Android Log implementation by default.
/** Abstraction for logging messages. */
public interface Log {
void debug(String format, Object... args);
void error(Throwable throwable, String format, Object... args);
/** A {@link Log} implementation which does not log anything. */
Log NONE = new Log() {
@Override public void debug(String format, Object... args){
}
@Override public void error(Throwable throwable, String format, Object... args){
}
};
}
A default options object that can be attached to every call to change params dynamically. We'll disallow setting a timestamp on this default options object, but other fields will be accepted.
The Android environment would load an Options object that disables all bundled integrations.
Same as the spec, this will be loaded dynamically by the environment to provide useful info, e.g. on Android, one would dynamically load properties about the user's device.
The listener will be called for each call to the analytics tracking methods. On Android, the integration manager would attach itself as a listener, and proxy the payloads to bundled integrations.
This will require exposing payloads, which would otherwise be an internal class. But that should be fine as long as we make them immutable.
interface Listener {
void onTrack(Track track);
void onScreen(Screen screen); // alias for page as well
void onFlush();
void onIdentify(Identify identify);
void onGroup(Group group);
void onAlias(Alias alias);
}
Example of how we could load Android stuff.
class AndroidAnalytics {
  static volatile Analytics singleton = null;
  /**
   * The global default {@link Analytics} instance.
   * <p/>
   * This instance is automatically initialized with defaults that are suitable to most
   * implementations.
   * <p/>
   * If these settings do not meet the requirements of your application, you can provide properties
   * in {@code analytics.xml} or you can construct your own instance with full control over the
   * configuration by using {@link Builder}.
   */
  public static Analytics with(android.app.Context androidContext) {
    if (singleton == null) {
      if (androidContext == null) {
        throw new IllegalArgumentException("Context must not be null.");
      }
      synchronized (Analytics.class) {
        if (singleton == null) {
          String writeKey = getResourceString(androidContext, WRITE_KEY_RESOURCE_IDENTIFIER);
          Builder builder = new Builder(writeKey);
          int maxQueueSize = getInteger(androidContext, QUEUE_SIZE_RESOURCE_IDENTIFIER, DEFAULT_QUEUE_SIZE);
          builder.queue(new TapeQueue(writeKey, maxQueueSize));
          boolean debugging = getBoolean(androidContext, DEBUGGING_RESOURCE_IDENTIFIER, DEFAULT_DEBUGGING); // todo: look up application flags if not defined in xml
          builder.logger(new AndroidLog(debugging));
          builder.context(loadContext(androidContext)); // fills a context dictionary with whatever we can get from the context
          IntegrationManager manager = new IntegrationManager();
          builder.listener(manager);
          builder.options(manager.options);
          builder.channel(MOBILE);
          builder.converter(new JsonObjectConverter());
          singleton = builder.build();
        }
      }
    }
    return singleton;
  }
}
We can abstract the Environment as well
interface Environment {
  Channel channel();
  Client defaultClient();
  Converter defaultConverter();
  Queue defaultQueue();
  Logger defaultLogger();
  Options defaultOptions();
  Context defaultContext();
  Listener defaultListener();
  static class Android implements Environment {
    @Override public Channel channel() {
      return Channel.MOBILE;
    }
    @Override public Converter defaultConverter() {
      return new JsonObjectConverter();
    }
    ...
  }
}
HTTP timeouts were added in Pull Request #11, and were released in 0.4.0. This was a big improvement because prior to that, requests would block indefinitely, and eventually all segment.io traffic would stop without warning.
Now, with the timeouts, individual requests can fail, but the overall system does not lock up.
The problem is when the requests fail, we now get exceptions in our logs (below), and the request is not retried. This means the data in the request is lost, and segment.io will not receive the events we are trying to submit.
[segment.io failure ws2]: <11>Feb 21 15:15:17 ws2.example.com [Thread-9] analytics Failed analytics response.Read timed outjava.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:136)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:152)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:270)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:161)
at sun.reflect.GeneratedMethodAccessor67.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.http.impl.conn.CPoolProxy.invoke(CPoolProxy.java:138)
at com.sun.proxy.$Proxy57.receiveResponseHeader(Unknown Source)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:254)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:85)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at com.github.segmentio.request.BlockingRequester.executeRequest(BlockingRequester.java:118)
at com.github.segmentio.request.BlockingRequester.send(BlockingRequester.java:60)
at com.github.segmentio.flush.Flusher.run(Flusher.java:91)
The spec was developed and deployed with the iOS SDK: https://gist.github.com/reinpk/7bd33d29694578b06cce (ignore the requestTimestamp on batch flushing, since we don't want to correct timestamps coming from a server).
Access is not synchronized, so the field must be marked volatile.
While we ran into the problem mentioned in issue #12, we noticed that Analytics.flush() will get stuck indefinitely if the Flusher thread has failed. It would be nice if you could add a timeout to the call to idle.waitOne(); so that it handles failure a bit more gracefully.
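One way to bound that wait, sketched with a CountDownLatch standing in for the library's internal idle signal (BoundedFlush, markIdle, and the latch mapping are illustrative assumptions, not the library's internals):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class BoundedFlush {
    private final CountDownLatch idle = new CountDownLatch(1);

    // Called by the Flusher thread when the queue drains.
    void markIdle() {
        idle.countDown();
    }

    // flush() waits for the drain signal, but gives up after the timeout
    // instead of blocking forever if the Flusher thread has died.
    boolean flush(long timeout, TimeUnit unit) throws InterruptedException {
        return idle.await(timeout, unit); // false => timed out, flusher likely dead
    }
}
```

Returning false (or throwing a TimeoutException) lets the caller log the failure and move on rather than hang.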
I didn't understand why Props (and its descendants, like EventProperties) do not accept BigDecimal values.
Since we upgraded from 0.4.2 to 1.0.0, we regularly get "Queue has reached maxSize, dropping payload.", around 100 to 200 times a day for our production account (for /agorapulse/manager).
Do you think that the default queue size is not enough?
It already looks pretty big by default (10 000 in the docs).
Hi,
Is there any ETA on analytics java 2.0 release?
anonymousId from sessionId, for clarity.
integrations object from context, for cleaner logs.
requestId, for easily tracing calls through to the raw logs.
library to be an object with name and version, for consistency.
Hello,
Your documentation suggests using the Analytics static interface class, which is ok, but generally using static variables is a bad idea as it hurts the mockability and flexibility of code.
In our project we use Client.java directly by initializing it in Spring like this:
<bean id="analytics" class="com.github.segmentio.Client" destroy-method="close">
<constructor-arg value="$[trackingkey.segmentio.secret]" />
</bean>
Then we can pass the analytics object into our beans, and when the Spring context is destroyed, our analytics object is torn down and everything is fine.
This allows us to mock out the analytics object in unit tests and verify it is being called with the correct parameters, etc.
The problem I'm bringing up here is that 'Client' is not a very public-friendly API class name. It's not very descriptive to define a class like:
class MyClass {
/** Analytics object */
Client analytics;
.... my code here....
}
If the class were named AnalyticsClient or something like that, it would be a lot cleaner.
A minor issue, but useful.
$ gradle -v
Build time: 2016-06-14 07:16:37 UTC
Revision: cba5fea19f1e0c6a00cc904828a6ec4e11739abc
Groovy: 2.4.4
Ant: Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM: 1.8.0_91 (Oracle Corporation 25.91-b14)
OS: Mac OS X 10.11.6 x86_64
The MessageBuilder currently assumes that all events should be timestamped "now"; this should be overridable.
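A sketch of the requested behavior: default the timestamp to "now" at build time, but let callers override it. TrackBuilder and Track are illustrative names, not the library's actual classes.

```java
import java.util.Date;

// Illustrative message type; not the library's actual class.
class Track {
    final String event;
    final Date timestamp;
    Track(String event, Date timestamp) { this.event = event; this.timestamp = timestamp; }
}

class TrackBuilder {
    private final String event;
    private Date timestamp; // null means "default to now at build time"

    TrackBuilder(String event) { this.event = event; }

    // Optional override for historical imports or client-captured times.
    TrackBuilder timestamp(Date timestamp) { this.timestamp = timestamp; return this; }

    Track build() {
        return new Track(event, timestamp != null ? timestamp : new Date());
    }
}
```

Callers importing historical data would set an explicit date; everyone else gets the old behavior unchanged.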
Hi,
By looking at the pom.xml, I noticed that the libraries used for testing are not scoped to test:
<dependency>
<groupId>com.squareup.burst</groupId>
<artifactId>burst-junit4</artifactId>
<version>${burst.version}</version>
</dependency>
<dependency>
<groupId>com.squareup.burst</groupId>
<artifactId>burst</artifactId>
<version>${burst.version}</version>
</dependency>
<dependency>
<groupId>org.assertj</groupId>
<artifactId>assertj-core</artifactId>
<version>${assertj.version}</version>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
<version>${mockito.version}</version>
</dependency>
Would it be possible to add a <scope>test</scope>
so they don't get pulled in when building the jar?
Between versions 0.4.x and 1.0.0, many of the methods in the Analytics class are no longer static. For example, the track method: https://github.com/segmentio/analytics-java/blob/analytics-0.4.0/src/main/java/com/github/segmentio/Analytics.java#L335 vs. https://github.com/segmentio/analytics-java/blob/master/src/main/java/com/github/segmentio/Analytics.java#L248. The documentation at https://segment.io/docs/libraries/java/#track suggests that these methods should still be static. Is this intentional?
Hello,
I noticed that the ability to specify a custom ThreadFactory is currently marked as @Beta and uses default visibility. Are there any plans to expose this? Currently we use this functionality to mark the executor threads as "daemon" threads and to give them a custom thread name.
More specifically, we are using these APIs:
com.segment.analytics.Analytics.Builder.threadFactory(ThreadFactory)
com.segment.analytics.Platform.defaultThreadFactory()
...and here is the custom thread factory (so you can see our use-case):
Analytics.builder(writeKey)
.threadFactory((runnable) -> {
Thread thread = Platform.get().defaultThreadFactory().newThread(runnable);
thread.setName(String.format("SegmentClient-%d-%d", instanceCount, threadCount));
thread.setDaemon(true);
return thread;
})
.build();
Thanks,
-Brian
Stick with Props, cuz even Java can have some terseness sometimes :)
But actually because when we add .page, it's going to get weird to say EventProperties then.
There is a pretty serious bug in 1.0.7 where dispatch causes the Flusher thread to enter a loop. I can see that Flusher has been replaced in mainline, so I am hesitant to submit a patch for the current Flusher if the next release is close to being cut.
I passed the message builder a map with a null value, and this threw an exception when the builder tried to copy it into a Guava ImmutableMap, which doesn't permit null values. This limitation should be clearly documented if it's not practical to work around it.
Add logging and statistics to signify that queue levels are high and messages are not being enqueued.
Hi,
Am I doing something wrong? Why does this sample not exit?
import com.segment.analytics.Analytics;
import com.segment.analytics.Log;
import com.segment.analytics.messages.IdentifyMessage;
import java.util.HashMap;
import java.util.concurrent.TimeUnit;
public class App {
public static void main(String... args) throws Exception {
final Analytics analytics =
Analytics.builder("YarSzwaAejB6EZH8dlf5RaAUD4o14Wg2").flushInterval(2000, TimeUnit.MILLISECONDS).log(new Log() {
@Override
public void print(Level level, String format, Object... args) {
System.out.println(level + "\t:" + String.format(format, args));
}
@Override
public void print(Level level, Throwable error, String format, Object... args) {
System.out.println(level + "\t:" + String.format(format, args));
System.out.println(error);
}
}).build();
new Thread() {
@Override
public void run() {
analytics.enqueue(IdentifyMessage.builder()
.userId("f4ca124298")
.traits(new HashMap<String, Object>() {{
put("name", "Michael Bolton");
put("email", "[email protected]");
}})
.build().toBuilder()
);
analytics.flush();
}
}.start();
Thread.sleep(10000);
System.out.println("Shutting down Analytics");
analytics.shutdown();
}
}
Hi,
We want to design a system that handles the situation when there's no connection between the client and the Segment API. We want to persist undelivered messages in order to send them again, say after an application restart or when the connection to the Segment API is established again.
I'm looking at the current Java client and don't see any means to track whether messages were delivered, or whether the analytics client was able to connect to the API endpoint (parsing logs is not a solution ☺).
Are you going to implement something like this? Is there any approach you can suggest for solving my problem with the current Java client?
Thank you.
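In the absence of delivery tracking in the client, one workaround is a write-ahead journal on the caller's side: append each message to disk before enqueueing, replay the file on restart, and clear it only once delivery is confirmed. This is a sketch of the idea, not library functionality; Journal and its methods are hypothetical helpers.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Hypothetical write-ahead journal wrapped around the analytics client.
class Journal {
    private final Path file;

    Journal(Path file) { this.file = file; }

    // Append the serialized message before handing it to the client.
    void record(String messageJson) throws IOException {
        Files.write(file, (messageJson + "\n").getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // On startup, re-enqueue whatever was never confirmed delivered.
    List<String> pendingOnRestart() throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : List.of();
    }

    // Clear only after delivery is confirmed (e.g. a successful flush).
    void clearAfterDelivery() throws IOException {
        Files.deleteIfExists(file);
    }
}
```

The hard part is the "confirmed delivered" signal, which is exactly what the issue asks the client to expose.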
I've noticed that an application had a 100% spinning thread while it should've been idle. The culprit turned out to be segmentio's Flusher thread.
I'm not familiar with the codebase but I took a stab at figuring out what went wrong.
Jstack output:
"Thread-15" #45 prio=5 os_prio=31 tid=0x00007fa8a10a7000 nid=0x7503 runnable [0x0000000125c73000]
java.lang.Thread.State: RUNNABLE
at com.github.segmentio.flush.Flusher.sendBatch(Flusher.java:110)
at com.github.segmentio.flush.Flusher.run(Flusher.java:77)
Code for sendBatch:
private void sendBatch(List<BasePayload> current) {
boolean success = true;
int retryCount = 0;
do {
try {
if (current.size() > 0) {
// we have something to send in this batch
logger.debug("Preparing to send batch.. [{} items]", current.size());
Batch batch = factory.create(current);
client.getStatistics().updateFlushAttempts(1);
success=requester.send(batch);
logger.debug("Initiated batch request .. [{} items]", current.size());
current = new LinkedList<BasePayload>();
}
} catch (RuntimeException e) {
// We will log and loop back around, so we
logger.error("Unexpected error while sending batch, catching so we don't lose records", e);
retryCount++;
success=false;
}
}
while (!success && retryCount < 3);
if (!success) {
logger.error("Unable to send batch after {} retries. Giving up on this batch.", retryCount);
}
}
It looks like the busy loop is broken - if requester.send(batch); fails but doesn't throw, then success gets set to false, the current list will get emptied, and retryCount won't get incremented (since nothing was thrown).
And now we basically have the following loop:
do {
if (current.size() > 0) { // always false
}
}
while (!success && retryCount < 3); // always true
this makes my CPU sad 😢
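A sketch of the fix: count an attempt whenever send() returns false (not just when it throws), and only discard the batch after a successful send. Requester here is a stand-in for the library's requester, not its actual interface.

```java
import java.util.List;

class FixedSender {
    // Stand-in for requester.send(batch): returns false on failure.
    interface Requester {
        boolean send(List<String> batch);
    }

    static boolean sendBatch(List<String> current, Requester requester) {
        boolean success = false;
        int retryCount = 0;
        do {
            try {
                if (current.isEmpty()) return true; // nothing to send
                success = requester.send(current);
                if (!success) retryCount++;         // count failed returns too, so the loop terminates
            } catch (RuntimeException e) {
                success = false;
                retryCount++;
            }
        } while (!success && retryCount < 3);
        if (success) current.clear();               // only drop the batch once it was actually sent
        return success;
    }
}
```

With both failure paths incrementing retryCount, the loop runs at most three times and the events survive for a later retry or requeue.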
Hi,
Would it be possible to release a new version with the PageMessage-functionality?
We use maven-enforce-plugin, there's a convergence error:
Dependency convergence error for com.squareup.okio:okio:1.2.0 paths to dependency are:
+-com.myproject:my-analytics:1.0.0-SNAPSHOT
+-com.segment.analytics.java:analytics-core:2.0.0-SNAPSHOT
+-com.squareup.okhttp:okhttp:2.2.0
+-com.squareup.okio:okio:1.2.0
and
+-com.myproject:my-analytics:1.0.0-SNAPSHOT
+-com.segment.analytics.java:analytics-core:2.0.0-SNAPSHOT
+-com.squareup.okio:okio:1.3.0
[WARNING] Rule 2: org.apache.maven.plugins.enforcer.DependencyConvergence failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for com.squareup.okio:okio:1.2.0 paths to dependency are:
+-com.myproject:my-analytics:1.0.0-SNAPSHOT
+-com.segment.analytics.java:analytics-core:2.0.0-SNAPSHOT
+-com.squareup.okhttp:okhttp:2.2.0
+-com.squareup.okio:okio:1.2.0
and
+-com.myproject:my-analytics:1.0.0-SNAPSHOT
+-com.segment.analytics.java:analytics-core:2.0.0-SNAPSHOT
+-com.squareup.okio:okio:1.3.0
]
To avoid this, we need to downgrade the okio dependency to 1.2.0 under the analytics parent.
Hi folks,
I have a fix for a bug present in 1.0.7, for which we are using a local patch. I would like to share this patch. It addresses a pretty serious connection pool management bug, which bit us when we upgraded some internal services to Jetty 9.3, to which the Segmentio library talks.
Are you accepting changes for this line, even though 2.x is in development? If so, which branch should they be made against?
cheers
b
com.google.common.collect.ImmutableMap does not allow for null keys or values.
Since analytics-java is using this library internally, as an implementation detail, I believe it should be the responsibility of analytics-java to strip out all nulls before calling ImmutableMap.copyOf(...) and ImmutableMap.of(...).
This is because I can successfully instantiate other map-like data structures (java.util.HashMap) with null values.
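A sketch of the suggested pre-processing step; stripNulls is a hypothetical helper, not part of the library.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class Nulls {
    // Drop null keys and values before handing the map to ImmutableMap.copyOf(...),
    // which rejects them with a NullPointerException.
    static <K, V> Map<K, V> stripNulls(Map<K, V> in) {
        Map<K, V> out = new LinkedHashMap<>(); // preserves iteration order
        for (Map.Entry<K, V> e : in.entrySet()) {
            if (e.getKey() != null && e.getValue() != null) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```

Whether silently dropping entries or throwing a descriptive error is the right policy is a separate design question, but either beats a bare NullPointerException from deep inside Guava.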
Let's upgrade the findbugs dependency to 2.0.3, since it seems to be the latest release in the 2.x line.
All the callbacks (transformers, interceptors and callback) should accept the analytics client that created the message.
We are facing some issues with event tracking timestamps.
Events are mixed up in our on-boarding funnel process: KISSmetrics/Intercom.io report that the end event is sometimes recorded before the start event, which is impossible.
We currently do not provide event timestamp since documentation says:
@param timestamp
* a {@link DateTime} object representing when the track took
* place. If the event just happened, leave it blank and we'll
* use the server's time. If you are importing data from the
* past, make sure you provide this argument.
Which server sets the timestamp:
A - our server, in the Analytics lib, when queueing the event?
B - or your server, when you receive the payload?
It looks like it's option B, so it might be better if we set the timestamp on our side to solve the issue.
Would there be any interest in adding a Spring Boot autoconfiguration starter module? It should be almost trivial, and I'll be happy to put in a PR.
Looks like apache snapshot repository does not contain analytics 2.0.0 dependencies.
The latest public release of segment.io java library had support for Java 5.
The current release (v2) has almost no Java 7 specific code, and adding support for Java 6 should be easy and painless.
Flusher does not check the return value of requester.send
in https://github.com/segmentio/analytics-java/blob/master/src/main/java/com/github/segmentio/flush/Flusher.java#L99
I believe this led to data loss during this morning's Segment outage. Our service which sends data to Segment is configured to do 2 retries, with backoff and timeout set to 1 second each. Each time the flusher fires, I see a string of messages like this:
ERROR [2015-03-02 14:41:43,622] analytics: Failed analytics response. [error = Read timed out]
INFO [2015-03-02 14:41:44,622] analytics: Retrying request [attempt 1] ..
ERROR [2015-03-02 14:41:45,691] analytics: Failed analytics response. [error = Read timed out]
INFO [2015-03-02 14:41:46,691] analytics: Retrying request [attempt 2] ..
ERROR [2015-03-02 14:41:47,758] analytics: Failed analytics response. [error = Read timed out]
and then silence for usually a minute or more.
From my reading of the code, the RetryingRequester returns false in this case. The Flusher has already removed the message from the queue, and doesn't check the return value of send
to see if it should requeue the batch.
Is it intended behavior that messages will be lost when a batch fails after the configured number of retries? Is it possible to configure the client to never drop batches?
Error in com.segment.analytics.Analytics.Builder#flushInterval
@Beta public Builder flushInterval(long flushInterval, TimeUnit unit) {
long flushIntervalInMillis = unit.toMillis(flushInterval);
if (flushInterval < 1000) {
throw new IllegalArgumentException("flushInterval must not be less than 1 second.");
}
this.flushIntervalInMillis = flushIntervalInMillis;
return this;
}
We need to compare flushIntervalInMillis with 1000.
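The corrected guard, with the comparison applied to the converted value; the class below is a minimal stand-in for the builder, not the library's code.

```java
import java.util.concurrent.TimeUnit;

class FlushIntervalCheck {
    long flushIntervalInMillis;

    // Corrected guard: validate the converted milliseconds, not the raw argument.
    // The original compared flushInterval < 1000, so flushInterval(2, SECONDS)
    // would incorrectly throw (2 < 1000) while flushInterval(500, DAYS) would pass.
    FlushIntervalCheck flushInterval(long flushInterval, TimeUnit unit) {
        long flushIntervalInMillis = unit.toMillis(flushInterval);
        if (flushIntervalInMillis < 1000) {
            throw new IllegalArgumentException("flushInterval must not be less than 1 second.");
        }
        this.flushIntervalInMillis = flushIntervalInMillis;
        return this;
    }
}
```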
Hi,
Your JSON deserialization seems to be dependent on the names of the classes being deserialized. This causes the code to fail if it has been minimized using ProGuard. We solved it for us by excluding your classes from ProGuard but we would prefer not to do so since this increases the size of our Jar and reduces performance. We got the following exception:
Exception in thread "Thread-4" java.lang.IllegalArgumentException: class com.a.a.c.f declares multiple JSON fields named a
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(S:122)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(S:72)
at com.google.gson.Gson.getAdapter(S:353)
at com.google.gson.internal.bind.m.write(S:55)
at com.google.gson.internal.bind.b.a(S:96)
at com.google.gson.internal.bind.b.write(S:60)
at com.google.gson.internal.bind.m.write(S:68)
at com.google.gson.internal.bind.i.a(S:89)
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(S:195)
at com.google.gson.Gson.toJson(S:586)
at com.google.gson.Gson.toJson(S:565)
at com.google.gson.Gson.toJson(S:520)
at com.google.gson.Gson.toJson(S:500)
at com.a.a.d.a.a(S:58)
at com.a.a.a.a.run(S:91)
The new retry mechanism is great, but if the RetryingRequester fails to send a batch in the allowed number of retries (currently 2), the batch is dropped (silently - see #18 for a logging enhancement for that).
I would like to propose modifying the Flusher to optionally preserve the current List instead of always resetting it after send(). This would allow the Flusher to collect more objects from the queue (up to BATCH_INCREMENT) and then try again.
I would argue the Flusher should try again with the current batch until successful. If the queue overflows, that is a separate issue, but it seems like a bad idea to drop events arbitrarily because the send() failed twice.
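The top-up step of this proposal could look like the following sketch; PreservingFlusher is an illustrative name and the BATCH_INCREMENT value is assumed, though the constant itself exists in the Flusher.

```java
import java.util.List;
import java.util.Queue;

class PreservingFlusher {
    static final int BATCH_INCREMENT = 50; // illustrative cap, not the library's actual value

    // After a failed send, keep `current` as-is and top it up from the queue
    // before retrying, instead of resetting the list and dropping the events.
    static void topUp(List<String> current, Queue<String> queue) {
        while (current.size() < BATCH_INCREMENT && !queue.isEmpty()) {
            current.add(queue.poll());
        }
    }
}
```

The surviving batch plus newly drained events then go through the next send() attempt together, so nothing is lost unless the queue itself overflows.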
I would like to be able to mock the Analytics class so that I can create unit tests.
Is it possible for you to create one, or can you recommend an alternate way of testing the Analytics class?
Best wishes,
Thorey
Does Context support a providers property?
It looks like SafeProperties only supports String, Integer, Double, Boolean, or Date.
So a HashMap of 'providers' is ignored.
How can I call the identify method only for Intercom.io (when other providers are already configured on Segment.io such as KISSmetrics and Customer.io)?
Thanks.
Hi,
Sorry for asking this question here, but there's no response from Segment support on the website.
The question is: why is there a READ KEY available in the Segment dashboard? What API methods are available for reading data?
Since people use Java server-side, it should support instantiating multiple clients/projects, like our other server-side SDKs, Node and Ruby.