databrickslabs / automl-toolkit

189 stars, 42 forks, 161.88 MB

Toolkit for Apache Spark ML: feature clean-up, a feature-importance calculation suite, information-gain selection, distributed SMOTE, model selection and training, hyperparameter optimization and selection, and model interpretability.

License: Other

Scala 30.64% Dockerfile 0.01% Python 0.38% HTML 68.97% Shell 0.01%
apache-spark feature-engineering machinelearning ml pyspark scala spark

automl-toolkit's People

Contributors

bali0019, benwilson2, geeksheikh, marygracemoesta, nathanknox, nsenno-dbr, wesley84

automl-toolkit's Issues

java.lang.ArrayIndexOutOfBoundsException when executing `FamilyRunner`

Hi,
I get a java.lang.ArrayIndexOutOfBoundsException: 1 when I execute the FamilyRunner (or the AutomationRunner). I used the example from README.md. How can I solve this problem?

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner

val data = spark.table("DF")

val overrides = Map("labelCol" -> "class")

val randomForestConfig = ConfigurationGenerator.generateConfigFromMap("RandomForest", "classifier", overrides)
val gbtConfig = ConfigurationGenerator.generateConfigFromMap("GBT", "classifier", overrides)
val logConfig = ConfigurationGenerator.generateConfigFromMap("LogisticRegression", "classifier", overrides)

val runner = FamilyRunner(data, Array(randomForestConfig, gbtConfig, logConfig)).execute()

The stack trace is below:


java.lang.ArrayIndexOutOfBoundsException: 1
	at com.databricks.labs.automl.utils.WorkspaceDirectoryValidation.validate(WorkspaceDirectoryValidation.scala:96)
	at com.databricks.labs.automl.utils.WorkspaceDirectoryValidation$.apply(WorkspaceDirectoryValidation.scala:123)
	at com.databricks.labs.automl.executor.DataPrep.prepData(DataPrep.scala:275)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:125)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:119)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at com.databricks.labs.automl.executor.FamilyRunner.execute(FamilyRunner.scala:119)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-983:1)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-983:48)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw$$iw.<init>(command-983:50)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw$$iw.<init>(command-983:52)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw$$iw.<init>(command-983:54)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$$iw.<init>(command-983:56)
	at line3fa913e91f964622bbee0641bf7664fb138.$read.<init>(command-983:58)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$.<init>(command-983:62)
	at line3fa913e91f964622bbee0641bf7664fb138.$read$.<clinit>(command-983)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval$.$print$lzycompute(<notebook>:7)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval$.$print(<notebook>:6)
	at line3fa913e91f964622bbee0641bf7664fb138.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
	at java.lang.Thread.run(Thread.java:748)
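
A hedged note rather than a confirmed fix: the trace fails inside WorkspaceDirectoryValidation, which appears to derive and validate an MLflow experiment path from the notebook context. Supplying the MLflow settings explicitly, as the next issue's example does, may avoid the failing default-path derivation. The keys below are taken from that example; every value is a placeholder:

val overrides = Map(
  "labelCol" -> "class",
  // All values below are placeholders, not verified defaults.
  "mlFlowExperimentName" -> "/Users/<your-user>/<experiment-name>",
  "mlFlowTrackingURI" -> "<Databricks Host URI>",
  "mlFlowAPIToken" -> dbutils.notebook.getContext().apiToken.get,
  "mlFlowModelSaveDirectory" -> "<User-Defined-Directory>",
  "inferenceConfigSaveLocation" -> "<User-Defined-Directory>"
)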

java.lang.NoSuchMethodError: org.mlflow.api.proto.Service$CreateRun$Builder.setRunName

Hi, after the FamilyRunner run completed, I got the error below:

java.lang.NoSuchMethodError: org.mlflow.api.proto.Service$CreateRun$Builder.setRunName(Ljava/lang/String;)Lorg/mlflow/api/proto/Service$CreateRun$Builder;

at com.databricks.labs.automl.tracking.MLFlowTracker.com$databricks$labs$automl$tracking$MLFlowTracker$$generateMlFlowRun(MLFlowTracker.scala:148)
	at com.databricks.labs.automl.tracking.MLFlowTracker.logBest(MLFlowTracker.scala:401)
	at com.databricks.labs.automl.tracking.MLFlowTracker.logMlFlowDataAndModels(MLFlowTracker.scala:352)
	at com.databricks.labs.automl.AutomationRunner.logResultsToMlFlow(AutomationRunner.scala:1291)
	at com.databricks.labs.automl.AutomationRunner.liftedTree1$1(AutomationRunner.scala:1439)
	at com.databricks.labs.automl.AutomationRunner.executeTuning(AutomationRunner.scala:1438)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:129)
	at com.databricks.labs.automl.executor.FamilyRunner$$anonfun$execute$1.apply(FamilyRunner.scala:119)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at com.databricks.labs.automl.executor.FamilyRunner.execute(FamilyRunner.scala:119)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-1020:5)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-1020:53)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw$$iw.<init>(command-1020:55)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw$$iw.<init>(command-1020:57)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw$$iw.<init>(command-1020:59)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$$iw.<init>(command-1020:61)
	at linea339e92b41aa489e83cc214c9c04f05540.$read.<init>(command-1020:63)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$.<init>(command-1020:67)
	at linea339e92b41aa489e83cc214c9c04f05540.$read$.<clinit>(command-1020)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval$.$print$lzycompute(<notebook>:7)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval$.$print(<notebook>:6)
	at linea339e92b41aa489e83cc214c9c04f05540.$eval.$print(<notebook>)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:793)
	at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1054)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:645)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:644)
	at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
	at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
	at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:644)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:576)
	at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:572)
	at com.databricks.backend.daemon.driver.DriverILoop.execute(DriverILoop.scala:215)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply$mcV$sp(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal$$anonfun$repl$1.apply(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExitInternal$.trapExit(DriverLocal.scala:679)
	at com.databricks.backend.daemon.driver.DriverLocal$TrapExit$.apply(DriverLocal.scala:632)
	at com.databricks.backend.daemon.driver.ScalaDriverLocal.repl(ScalaDriverLocal.scala:197)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:368)
	at com.databricks.backend.daemon.driver.DriverLocal$$anonfun$execute$8.apply(DriverLocal.scala:345)
	at com.databricks.logging.UsageLogging$$anonfun$withAttributionContext$1.apply(UsageLogging.scala:238)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at com.databricks.logging.UsageLogging$class.withAttributionContext(UsageLogging.scala:233)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionContext(DriverLocal.scala:48)
	at com.databricks.logging.UsageLogging$class.withAttributionTags(UsageLogging.scala:271)
	at com.databricks.backend.daemon.driver.DriverLocal.withAttributionTags(DriverLocal.scala:48)
	at com.databricks.backend.daemon.driver.DriverLocal.execute(DriverLocal.scala:345)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at com.databricks.backend.daemon.driver.DriverWrapper$$anonfun$tryExecutingCommand$2.apply(DriverWrapper.scala:644)
	at scala.util.Try$.apply(Try.scala:192)
	at com.databricks.backend.daemon.driver.DriverWrapper.tryExecutingCommand(DriverWrapper.scala:639)
	at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:485)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
	at java.lang.Thread.run(Thread.java:748)

My code is:

import com.databricks.labs.automl.executor.config.ConfigurationGenerator
import com.databricks.labs.automl.executor.FamilyRunner

val sourceData = spark.read.load("<DATA>")
val overrides = Map(
  "labelCol" -> "is_attributed",
  "mlFlowExperimentName" -> "<User-Defined-Name>",
  "mlFlowTrackingURI" -> "<Databricks Host URI>",
  "mlFlowAPIToken" -> dbutils.notebook.getContext().apiToken.get,
  "mlFlowModelSaveDirectory" -> "<User-Defined-Directory>",
  "inferenceConfigSaveLocation" -> "<User-Defined-Directory>",
  "tunerParallelism" -> 30
)
val randomForestConfig = ConfigurationGenerator.generateConfigFromMap("RandomForest", "classifier", overrides)
val gbtConfig = ConfigurationGenerator.generateConfigFromMap("GBT", "classifier", overrides)
val logConfig = ConfigurationGenerator.generateConfigFromMap("LogisticRegression", "classifier", overrides)

val runner = FamilyRunner(sourceData, Array(logConfig)).execute()

Additionally, these libraries are installed on my cluster:

  • automatedml_2_11_0_5_1.jar (JAR, Installed): dbfs:/FileStore/jars/0391c7b8_92d3_4a41_92e4_1456ab5d4d54-automatedml_2_11_0_5_1-3990a.jar
  • azureml (PyPI, Uninstall pending restart)
  • Hyperopt (PyPI, Installed)
  • keras (PyPI, Installed)
  • koalas (PyPI, Installed)
  • ml.combust.mleap:mleap-spark_2.11:0.14.0 (Maven, Installed)
  • mleap (PyPI, Installed)
  • mlflow (PyPI, Installed)
  • org.mlflow:mlflow-client:1.2.0 (Maven, Installed)
  • org.mlflow:mlflow-scoring:1.2.0 (Maven, Installed)
  • seaborn (PyPI, Installed)
  • sklearn (PyPI, Installed)
  • xgboost (PyPI, Installed)
  • xgboost4j_spark_0_90.jar (JAR, Installed): dbfs:/FileStore/jars/2afc2977_6cc0_4511_8b70_555882caa8af-xgboost4j_spark_0_90-b50ca.jar
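
A hedged observation, not a confirmed fix: a NoSuchMethodError at runtime usually means the org.mlflow:mlflow-client jar on the classpath differs from the version the toolkit was compiled against, i.e. the installed 1.2.0 jar appears to lack CreateRun.Builder.setRunName. Aligning the Maven coordinate with the version declared in the toolkit's build.sbt should resolve it; "X.Y.Z" below is a placeholder to be read from build.sbt:

// Hedged sketch for build.sbt; replace X.Y.Z with the version the toolkit pins.
libraryDependencies += "org.mlflow" % "mlflow-client" % "X.Y.Z"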

Will Python APIs be provided?

It seems that the current version only supports Scala APIs for building the automation flow. Will Python APIs be provided?

'FeatureImportance' object has no attribute 'run_feature_importances'

I am trying to run the Python example on Databricks.
When I reach this line, I get the error in the subject:
fi_importances = FI.run_feature_importances("XGBoost", "classifier", dataframe, 20.0, "count", generic_overrides)

I have attached the wheel file pyAutoML-0.2.0-py3-none-any.whl to my cluster.

How to install AutoML-Toolkit for Python in Databricks?

This module is very promising. I want to use it, but I don't know how to get the .whl file.

The Python installation document says: "Currently, this library exists as a .whl file in the /dist directory."

Where can I find the .whl file, and how do I add it in Databricks?

Many thanks in advance.
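
A hedged sketch of one common route (all paths below are placeholders, not documented locations): build the wheel per the repo's /dist instructions, copy it to DBFS, then attach it to the cluster as a wheel library via the Libraries UI:

// Scala notebook cell; assumes the wheel was already built on the driver.
// Both paths are placeholders.
dbutils.fs.cp(
  "file:/tmp/automl-toolkit/dist/pyAutoML-0.2.0-py3-none-any.whl",
  "dbfs:/FileStore/whls/pyAutoML-0.2.0-py3-none-any.whl"
)

After the copy, the dbfs:/FileStore/whls/ path can be selected when installing the library on the cluster.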

Issue with DataSplitUtility repartition(0)

When following this tutorial, I encounter the following error during feature selection thrown by DataSplitUtility:
java.lang.IllegalArgumentException: requirement failed: Number of partitions (0) must be positive.

The one thing I do differently from the tutorial is setting tunerTrainSplitMethod to "chronological", as in:

Map(
  ...
  "tunerTrainSplitMethod" -> "chronological",
  "tunerTrainSplitChronologicalColumn" -> "id",
  "tunerTrainSplitChronologicalRandomPercentage" -> 0.25,
  ...
)

Any ideas on how to fix the issue?

I am using:

  • Spark 3.2.0
  • Hadoop 3.3.1
  • Scala 2.12.15
  • automl-toolkit 0.8.1
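
For reference, a minimal sketch that reproduces the underlying Spark error outside the toolkit; the assumption here (unverified) is that the toolkit computes a partition count that rounds down to zero before calling repartition:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").getOrCreate()
val df = spark.range(10).toDF("id")
// Throws java.lang.IllegalArgumentException:
// requirement failed: Number of partitions (0) must be positive.
df.repartition(0).count()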

Issue with DropColumnsTransformer when split is "chronological"

Since yesterday I have been trying FamilyRunner: it gets past the DropColumnsTransformer stage as long as I don't use the "chronological" split method, but then fails in DataSplitUtility.split as reported above.
The error I get from FamilyRunner is different from the one above: in my understanding, DropColumnsTransformer drops the tunerTrainSplitChronologicalColumn despite the fact that I add it to fieldsToIgnoreInVector.

In my understanding, columns in fieldsToIgnoreInVector should be left untouched by all transformers, but that does not seem to be the case. The problem can be spotted with the debug flag: in my experiment, tunerTrainSplitChronologicalColumn -> "id_col", but the column is not present in the step's output dataset:

...
Output dataset schema: root
 |-- label_col: integer (nullable = true)
 |-- automl_internal_id: long (nullable = false)
 |-- features: vector (nullable = true)

=== End of class com.databricks.labs.automl.pipeline.DropColumnsTransformer Pipeline Stage log <==

I will look deeper into this and open a PR to fix it.
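
For completeness, a hedged sketch of the configuration that triggers the symptom, using only names taken from the report above (values are illustrative, not a tested repro):

// Expectation: "id_col" survives every pipeline stage; the debug output
// above shows it missing after DropColumnsTransformer.
val overrides = Map(
  "labelCol" -> "label_col",
  "tunerTrainSplitMethod" -> "chronological",
  "tunerTrainSplitChronologicalColumn" -> "id_col",
  "fieldsToIgnoreInVector" -> Array("id_col") // should be left untouched
)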

Feature interaction: evaluation scoring on original input fields is too slow

Hey guys, I'm reading the source code and I would like to sincerely thank you all for the work you've done here and for making the code public. But I noticed that some of the code in FeatureInteraction runs very slowly, for example:

val nominalScores = nominalFields.map { x =>
  x -> ColumnScoreData(
    scoreColumn(
      df,
      modelType,
      x,
      getFieldType("nominal"),
      totalRecordCount
    ),
    "nominal"
  )
}.toMap

val continuousScores = continuousFields.map { x =>
  x -> ColumnScoreData(
    scoreColumn(
      df,
      modelType,
      x,
      getFieldType("continuous"),
      totalRecordCount
    ),
    "continuous"
  )
}.toMap

Are there any suggestions for parallelism? Looking forward to your reply!
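
One hedged option, assuming scoreColumn only issues Spark actions (the Spark scheduler accepts concurrent job submissions from multiple driver threads): map over the fields with a Scala parallel collection so the per-column scoring jobs overlap instead of running serially.

// A sketch to benchmark, not a tested patch: parallelize the scoring loop.
val nominalScores = nominalFields.par.map { x =>
  x -> ColumnScoreData(
    scoreColumn(df, modelType, x, getFieldType("nominal"), totalRecordCount),
    "nominal"
  )
}.seq.toMap

The same transformation applies to continuousScores. The achievable overlap is bounded by the driver's thread pool and cluster capacity, so this is not a guaranteed speedup.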

How to use in Databricks?

Hello, I followed the instructions in this repo and was able to build with SBT. I installed the jar on my cluster using the GUI, but I still get an error when importing the library. If you could provide guidance, that would be great.

Thanks!

License

Hi,
What is the plan with the license for this project? Will it become Apache 2.0 like DeltaLake?

Also, will there be a Spark 3.0/Scala 2.12 release?

Thanks

Fix example notebook so it works out of the box

The current example does not work.
https://github.com/databrickslabs/automl-toolkit/blob/master/demos/AutoMLPresentationDemo.dbc

Issues:

  • Load data
  • Parameterize hard-coded path names
    • Experiment name is hardcoded
      • Cmd 15 has experiment name hardcoded to /Users/[email protected]/autoMLTraining
      • Use dbutils.notebook.getContext().tags("user") to parameterize user home dir
    • Cmd 16 - dbfs:/tmp/tomes/ml/automl/models/$projectName/
    • Cmd 28 - /tmp/tomes/ml/automl/inference/auto_ml_demo
  • xgboost error - see below.
  • Cmd 3 High Level Process diagram minor misspelling: infernece -> inference
at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:1102)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$cleanUpAfterSchedulerStop$1.apply(DAGScheduler.scala:1100)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at org.apache.spark.scheduler.DAGScheduler.cleanUpAfterSchedulerStop(DAGScheduler.scala:1100)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onStop(DAGScheduler.scala:2592)
	at org.apache.spark.util.EventLoop.stop(EventLoop.scala:84)
	at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2503)
	at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:2107)
	at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1506)
	at org.apache.spark.SparkContext.stop(SparkContext.scala:2106)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply$mcV$sp(SparkParallelismTracker.scala:131)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
	at org.apache.spark.TaskFailedListener$$anon$1$$anonfun$run$1.apply(SparkParallelismTracker.scala:131)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
	at org.apache.spark.TaskFailedListener$$anon$1.run(SparkParallelismTracker.scala:130)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:893)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2243)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2265)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2284)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2309)
	at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:961)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.RDD.collect(RDD.scala:960)
	at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:309)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:171)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:151)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:62)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:61)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:379)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:61)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.x$4$lzycompute(BinaryClassificationMetrics.scala:155)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.x$4(BinaryClassificationMetrics.scala:146)
	at org.apache.spark.mllib.evaluation.BinaryClassificationMetrics.confusions$lzycompute(BinaryClassificationMetrics.scala:148)
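
A hedged sketch of the parameterization suggested above; the path layout mirrors the hardcoded values called out in this issue, and projectName is a placeholder:

// Derive per-user paths instead of hardcoding them (Cmd 15, 16, 28).
val user = dbutils.notebook.getContext().tags("user")
val projectName = "auto_ml_demo" // placeholder
val experimentName = s"/Users/$user/autoMLTraining"
val modelSaveDir = s"dbfs:/tmp/$user/ml/automl/models/$projectName/"
val inferenceSaveDir = s"/tmp/$user/ml/automl/inference/$projectName"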
