
Cosine LSH Join Spark

A Spark library for approximate nearest neighbours (ANN).

Background

In many computational problems such as NLP, recommendation systems and search, items (e.g. words) are represented as vectors in a multidimensional space. Given a specific item, its nearest neighbours need to be found, e.g. given a query, find the most similar items. A naive linear scan over the data set might be too slow for most data sets.

Hence, more efficient algorithms are needed. One of the most widely used approaches is Locality Sensitive Hashing (LSH). This family of algorithms is very fast but might not return the exact solution and is hence called approximate nearest neighbours (ANN). The trade-off between accuracy and speed is generally controlled via the parameters of the algorithm.

Joiner Interface

This is an interface to find the k nearest neighbours of every object in a data set within that same data set. Implementations may be either exact or approximate.

trait Joiner {
    def join(matrix: IndexedRowMatrix): CoordinateMatrix
}

matrix is a row-oriented matrix. Each row in the matrix represents an item in the data set. Items are identified by their matrix index. join returns a similarity matrix whose entries are MatrixEntry(itemA, itemB, similarity).

Example

// item_a --> (1.0, 1.0, 1.0)
// item_b --> (2.0, 2.0, 2.0)
// item_c --> (6.0, 3.0, 2.0)

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}

val rows = Seq(
  IndexedRow(1, Vectors.dense(1.0, 1.0, 1.0)),
  IndexedRow(2, Vectors.dense(2.0, 2.0, 2.0)),
  IndexedRow(5, Vectors.dense(6.0, 3.0, 2.0))
)
val matrix = new IndexedRowMatrix(sc.parallelize(rows))
val similarityMatrix = joiner.join(matrix)

val results = similarityMatrix.entries.map { entry =>
  "item:%d item:%d cosine:%.2f".format(entry.i, entry.j, entry.value)
}

results.foreach(println)

// above will print (the exact order may vary):
// item:2 item:5 cosine:0.91
// item:1 item:5 cosine:0.91
// item:1 item:2 cosine:1.00

Please see the included Main.scala file for a more detailed example.

Implementations of the Joiner Interface

LSH

This is an implementation of the following paper for Spark:

Randomized Algorithms and NLP: Using Locality Sensitive Hash Function for High Speed Noun Clustering

-- Ravichandran et al.

The algorithm determines a set of candidate items in a first stage and computes the exact cosine similarity only for those candidates. It has been successfully used in production with typical run times of a couple of minutes for millions of items.

Note that candidates are ranked by their exact cosine similarity. Hence, this algorithm will not return any false positives (items that the system thinks are nearby but actually are not). Most real-world applications require this: in recommendation systems, for example, it is acceptable to return similar items which are almost as good as the exact nearest neighbours, but showing false positives would result in senseless recommendations.

val lsh = new Lsh(
  minCosineSimilarity = 0.5,
  dimensions = 2,
  numNeighbours = 3,
  numPermutations = 1,
  partitions = 1,
  storageLevel = StorageLevel.MEMORY_ONLY
)

Please see the original publication for a detailed description of the parameters.
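
Since Lsh is an implementation of the Joiner interface, the instance above can be plugged into the earlier example; a minimal sketch (assuming, as the parameter name suggests, that pairs below minCosineSimilarity are filtered out by the join):

val similarityMatrix = lsh.join(matrix)

// only pairs with cosine >= minCosineSimilarity are expected in the result
similarityMatrix.entries
  .map(entry => (entry.i, entry.j, entry.value))
  .foreach(println)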

NearestNeighbours

Brute force method to compute exact nearest neighbours. As this is a very expensive O(n^2) computation, an additional sample parameter may be passed so that neighbours are only computed for a random fraction of the data set. This implementation may be used to tune the parameters of the approximate solutions on a small subset of the data, as sketched below.
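
For illustration, a minimal brute-force sketch of the same idea (not the library class itself; the helper below is hypothetical): sample a fraction of the rows and compute the exact cosine similarity against all other rows.

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{CoordinateMatrix, IndexedRowMatrix, MatrixEntry}

// hypothetical helper, for illustration only
def bruteForceJoin(matrix: IndexedRowMatrix, fraction: Double, minSim: Double): CoordinateMatrix = {
  val rows = matrix.rows.map(row => (row.index, row.vector))
  val sampled = rows.sample(withReplacement = false, fraction)
  val entries = sampled.cartesian(rows)
    .filter { case ((i, _), (j, _)) => i != j } // skip self pairs
    .map { case ((i, u), (j, v)) =>
      val dot = u.toArray.zip(v.toArray).map { case (a, b) => a * b }.sum
      MatrixEntry(i, j, dot / (Vectors.norm(u, 2) * Vectors.norm(v, 2)))
    }
    .filter(_.value >= minSim)
  new CoordinateMatrix(entries)
}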

QueryJoiner Interface

An interface to find the nearest neighbours in a catalog matrix for each entry in a query matrix. Implementations may be either exact or approximate.

trait QueryJoiner {
  def join(queryMatrix: IndexedRowMatrix, catalogMatrix: IndexedRowMatrix): CoordinateMatrix
}
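
A minimal sketch of calling a QueryJoiner, where queryJoiner stands for any implementation described below (entry.i presumably refers to a row of the query matrix and entry.j to a row of the catalog matrix):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.{IndexedRow, IndexedRowMatrix}

val queryMatrix = new IndexedRowMatrix(sc.parallelize(Seq(
  IndexedRow(0, Vectors.dense(1.0, 0.0)),
  IndexedRow(1, Vectors.dense(0.0, 1.0))
)))
val catalogMatrix = new IndexedRowMatrix(sc.parallelize(Seq(
  IndexedRow(0, Vectors.dense(1.0, 1.0)),
  IndexedRow(1, Vectors.dense(2.0, 0.0))
)))

val matches = queryJoiner.join(queryMatrix, catalogMatrix)
matches.entries.foreach(println)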

Implementations of the QueryJoiner Interface

QueryLsh

A standard LSH implementation. The query matrix is hashed multiple times, and exact hash matches are looked up in the catalog matrix. These candidates are then used to compute the exact cosine similarity.

QueryHamming

Implementation based on approximated cosine similarities. The cosine similarities are approximated using hamming distances, which are much faster to compute. The catalog matrix is broadcast, so this implementation is suited to tasks where the catalog matrix is very small compared to the query matrix.
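
To illustrate the underlying idea (a conceptual sketch, not the library's internal code; all names here are made up): each vector is reduced to a bit signature via random hyperplanes, and the hamming distance between two signatures approximates the angle between the vectors, i.e. cos(theta(u, v)) is approximately cos(hammingDistance / d * pi).

import scala.util.Random

// one bit per random hyperplane: the sign of the projection
def signature(vector: Array[Double], hyperplanes: Array[Array[Double]]): Array[Boolean] =
  hyperplanes.map(h => h.zip(vector).map { case (a, b) => a * b }.sum >= 0.0)

def hamming(a: Array[Boolean], b: Array[Boolean]): Int =
  a.zip(b).count { case (x, y) => x != y }

// cos(theta(u, v)) is approximated by cos(hammingDistance / d * pi),
// where d is the number of random hyperplanes
def approxCosine(u: Array[Double], v: Array[Double], d: Int, seed: Long = 42L): Double = {
  val rng = new Random(seed)
  val planes = Array.fill(d, u.length)(rng.nextGaussian())
  math.cos(hamming(signature(u, planes), signature(v, planes)).toDouble / d * math.Pi)
}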

QueryNearestNeighbours

Brute force O(size(query) * size(catalog)) method to compute exact nearest neighbours for the rows in the query matrix. As this is a very expensive computation, additional sample parameters may be passed so that neighbours are only computed for a random fraction of the data set. This implementation may be used to tune the parameters of the approximate solutions on a small subset of the data.

Maven

The artifacts are hosted on Maven Central. For Spark 1.x add the following line to your build.sbt file:

libraryDependencies += "com.soundcloud" % "cosine-lsh-join-spark_2.10" % "0.0.5"

For Spark 2.x use:

libraryDependencies += "com.soundcloud" % "cosine-lsh-join-spark_2.10" % "1.0.1"

or, if you are on Scala 2.11.x, use:

libraryDependencies += "com.soundcloud" % "cosine-lsh-join-spark_2.11" % "1.0.1"

Releasing (maintainers only)

In order to release the library using the sbt release plugin (sbt release), you need to set up the following:

Contributors

Özgür Demir

Rany Keddo

Alexey Rodriguez Yakushev

Aaron Levin


cosine-lsh-join-spark's Issues

How can I use a sparse matrix instead of a dense matrix?

I have a matrix with a dimension of 2M x 30K. It contains binary data and is highly sparse,
so I want to use the LSH approach to find similar rows.

In Main.scala there is a part where a dense matrix is used, e.g.

val rows = indexed.map {
  case ((word, features), index) =>
    IndexedRow(index, Vectors.dense(features))
}

How could I implement it with a sparse matrix?

Thanks in advance.
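
For reference, Vectors.sparse can be used in place of Vectors.dense when building the IndexedRows; a minimal sketch, assuming each row provides its non-zero indices and values (the input shape here is hypothetical, not taken from Main.scala):

import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.IndexedRow

// hypothetical input: ((word, (nonZeroIndices, nonZeroValues)), rowIndex)
val rows = indexed.map {
  case ((word, (nonZeroIndices, nonZeroValues)), index) =>
    // 30000 is the total number of columns (the 30K dimension above)
    IndexedRow(index, Vectors.sparse(30000, nonZeroIndices, nonZeroValues))
}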

java.lang.ClassCastException: cannot assign instance of scala.concurrent.duration.FiniteDuration to field org.apache.spark.rpc.RpcTimeout.duration of type scala.concurrent.duration.FiniteDuration in instance of org.apache.spark.rpc.RpcTimeout

Hi

My environment is Spark 2.3.0 with spark-mllib_2.11:2.3.0.
When I include com.soundcloud:cosine-lsh-join-spark_2.11:1.0.4 in my shadowJar, it causes the following exception.

[2018-06-22 08:20:37,962][ERROR] TransportRequestHandler : Error while invoking RpcHandler#receive() for one-way message.
java.lang.ClassCastException: cannot assign instance of scala.concurrent.duration.FiniteDuration to field org.apache.spark.rpc.RpcTimeout.duration of type scala.concurrent.duration.FiniteDuration in instance of org.apache.spark.rpc.RpcTimeout
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2233)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1405)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2284)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2278)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2202)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2060)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1567)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:427)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(NettyRpcEnv.scala:271)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:320)
at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1.apply(NettyRpcEnv.scala:270)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
at org.apache.spark.rpc.netty.NettyRpcEnv.deserialize(NettyRpcEnv.scala:269)
at org.apache.spark.rpc.netty.RequestMessage$.apply(NettyRpcEnv.scala:611)
at org.apache.spark.rpc.netty.NettyRpcHandler.internalReceive(NettyRpcEnv.scala:662)
at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:654)
at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:208)
at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:113)
at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
at java.lang.Thread.run(Thread.java:748)

saveAsTextFile does not work

Hi there,
I tried to save the result by adding the following command at the end of Main.scala:

result.saveAsTextFile(output)

It worked on my laptop, but got stuck on my cluster. Can you help me with that?

Thanks a lot.
rubing

Which "vector" to used for calculating similarity

In the construction of signatures in matrixToBitSet, the vector used is the one obtained by multiplying inputMatrix with localRandomMatrix, and this same vector is later used to calculate the similarity of words. This implies cos(theta(u, v)) = cos(theta(u * r, v * r)) (here r is the random matrix). According to the original paper, cos(theta(u, v)) = cos((1 - Pr[h_r(u) = h_r(v)]) * pi) = cos(hamming_distance / d * pi), and I don't think this implies cos(hamming_distance / d * pi) = cos(theta(u * r, v * r))? (Correct me if I'm wrong.)

So I think we should either

  1. change indexedRow.vector in matrixToBitSet to be the corresponding row in inputMatrix; or
  2. change Cosine(x.vector, y.vector) in neighbours() to be hamming(x.bitSet, y.bitSet)

StackOverflowError with Spark 2.1.0

Hi,
I know that the library is tested with Spark 2.0.1, but I decided to give Spark 2.1.0 a try and wanted to report my experience.

My scenario is quite simple: I have a DataFrame from which I generate an IndexedRowMatrix.

Everything works fine if the DataFrame has a small number of features, while there is an issue with a larger number of features (about 200).

With Spark 2.1.0 (local mode), the job (started from Spark submit) gets aborted due to stage failure.

The crucial point seems to be this:
...
org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix.numCols(IndexedRowMatrix.scala:57)
at com.soundcloud.lsh.Lsh.join(Lsh.scala:33)
at clustering.ClusteringCmd$$anonfun$28.apply(ClusteringCmd.scala:533)

Line 533 of ClusteringCmd.scala is:
val similarityMatrix: CoordinateMatrix = lsh.join(matrix)

Line 33 of Lsh.scala is:
val numFeatures = inputMatrix.numCols().toInt

Things work fine when going back to Spark 2.0.1 (or when querying a DataFrame with a smaller feature set).

I am reporting the full stack trace, hoping it will be useful for future development.

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1353)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.take(RDD.scala:1326)
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1367)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.first(RDD.scala:1366)
at org.apache.spark.mllib.linalg.distributed.IndexedRowMatrix.numCols(IndexedRowMatrix.scala:57)
at com.soundcloud.lsh.Lsh.join(Lsh.scala:33)
at clustering.ClusteringCmd$$anonfun$28.apply(ClusteringCmd.scala:533)
at clustering.ClusteringCmd$$anonfun$28.apply(ClusteringCmd.scala:527)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
at clustering.ClusteringCmd$.clusteringParallelEnsemble(ClusteringCmd.scala:527)
at clustering.ClusteringCmd$$anonfun$execEnsemble$1.apply(ClusteringCmd.scala:225)
at clustering.ClusteringCmd$$anonfun$execEnsemble$1.apply(ClusteringCmd.scala:212)
at scala.collection.immutable.List.foreach(List.scala:381)
at clustering.ClusteringCmd$.execEnsemble(ClusteringCmd.scala:212)
at MainParams$.main(MainParams.scala:89)
at MainParams.main(MainParams.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.spark_project.guava.util.concurrent.ExecutionError: java.lang.StackOverflowError
at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2261)
at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:890)
at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:155)
at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:43)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:874)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:871)
at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:363)
at org.apache.spark.sql.execution.SortExec.createSorter(SortExec.scala:63)
at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:102)
at org.apache.spark.sql.execution.SortExec$$anonfun$1.apply(SortExec.scala:101)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.StackOverflowError
at org.codehaus.janino.Descriptor.size(Descriptor.java:86)
at org.codehaus.janino.CodeContext.determineArgumentsSize(CodeContext.java:785)
at org.codehaus.janino.CodeContext.flowAnalysis(CodeContext.java:474)
at org.codehaus.janino.CodeContext.flowAnalysis(CodeContext.java:541)
... (the last line is repeated many times)
