
cascading's People

Contributors

ahmed--mohsen, avibryant, benpence, cchepelov, colinmarc, cwensel, daniel-sudz, dvryaboy, fhueske, fs111, gianm, isnotinvain, jalkjaer, jamesiry, jenniferlin, julienledem, lukasnalezenec, mickaellcr, nahguam, nandorkollar, ohrite, patduin, rdblue, redshiftetl, sritchie, toktarev, tomwhite, tsdeng


cascading's Issues

S3Tap parses bucket names incorrectly

I am using a custom AWS endpoint and, separately, an S3 URL of the form s3://my_tenant:my-bucket (note the underscore).

In the S3Tap constructor (line 824 in my version) this line is causing problems:
    this.bucketName = identifier.getHost();

The URI host is null. However, the URI authority contains 'my_tenant:my-bucket', which is the full name of the bucket I need to access.
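
For reference, a minimal sketch (plain java.net.URI, outside of S3Tap) showing the behaviour and a possible fallback; the fallback is an assumption, not the current S3Tap implementation:

    import java.net.URI;

    public class BucketNameDemo {
        public static void main(String[] args) {
            // the underscore (and the embedded colon) prevent the authority from parsing
            // as host:port, so getHost() returns null while getAuthority() keeps the raw value
            URI identifier = URI.create("s3://my_tenant:my-bucket/some/key");

            System.out.println(identifier.getHost());      // null
            System.out.println(identifier.getAuthority()); // my_tenant:my-bucket

            // possible fallback for deriving the bucket name (an assumption, not the S3Tap code):
            String bucketName = identifier.getHost() != null
                ? identifier.getHost()
                : identifier.getAuthority();
            System.out.println(bucketName);
        }
    }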

Casting issue in PropertyUtil

I am encountering an issue when running unit tests in a custom library that has Cascading as a dependency. This block of code has a problem when casting a Class to a String. The call into PropertyUtil comes from AppProps.getApplicationJarClass. Here is the stack trace:

java.lang.ClassCastException: java.lang.Class cannot be cast to java.lang.String
    at cascading.property.AppProps.getApplicationJarClass(AppProps.java:113)
    at cascading.flow.hadoop.planner.HadoopPlanner.initialize(HadoopPlanner.java:166)
    at com.anon.cascading_ext.flow.LoggingHadoopPlanner.initialize(LoggingHadoopPlanner.java:75)
    at com.anon.cascading_ext.flow.LoggingFlowConnector.connect(LoggingFlowConnector.java:68)
    at cascading.flow.FlowConnector.connect(FlowConnector.java:421)
    at cascading.flow.FlowConnector.connect(FlowConnector.java:270)
    at cascading.flow.FlowConnector.connect(FlowConnector.java:215)
    at cascading.flow.FlowConnector.connect(FlowConnector.java:197)
    at com.anon.formats.mapred.inputformat.TestDirectoryLineInputFormat.testGetCurrentTapSourcePath(TestDirectoryLineInputFormat.java:61)

This could be solved by checking whether defaultValue is an instance of String and, when it is not, falling back to Class.toString(). My org is in the process of upgrading from a very old version of Cascading to Cascading 4.5.0. In the version we are currently using, AppProps.getApplicationJarClass actually passes a null Class object as the default value.

Is there a recommended method of fixing this casting issue?

Running into this on JDK 8, Cascading 4.5.0
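
For illustration, a minimal sketch of the kind of defensive handling suggested above; the helper name and shape are hypothetical, not the actual PropertyUtil API:

    import java.util.Map;

    public class PropertyLookupSketch {
        // hypothetical helper: tolerate a Class default value instead of
        // blindly casting the resolved property value to String
        static String getStringProperty(Map<Object, Object> properties, String key, Object defaultValue) {
            Object value = properties.getOrDefault(key, defaultValue);

            if (value == null)
                return null;

            if (value instanceof String)
                return (String) value;

            if (value instanceof Class)
                return ((Class<?>) value).getName(); // or value.toString(), as suggested above

            return value.toString();
        }
    }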

Move counter duration measurements to aggregated nanos

Step/SliceCounters sum durations in wall-clock milliseconds, but some read or write operations may be very short.

The concern is that we are introducing a lot of error by summing millisecond-granularity interval durations. Accumulating durations with a nanosecond-granularity Stopwatch that is then periodically published on a shared timer may improve relative accuracy.
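
A minimal sketch of the idea, assuming plain System.nanoTime() and a shared scheduled executor rather than the actual Step/SliceCounters internals:

    import java.util.concurrent.Callable;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.function.LongConsumer;

    // accumulate elapsed time in nanoseconds and publish the total to a
    // millisecond counter on a shared timer, instead of rounding each short
    // read/write interval to milliseconds as it happens
    public class NanoDurationAccumulator {
        private final AtomicLong totalNanos = new AtomicLong();
        private final ScheduledExecutorService publisher = Executors.newSingleThreadScheduledExecutor();

        public void start(LongConsumer publishMillis, long periodSeconds) {
            publisher.scheduleAtFixedRate(
                () -> publishMillis.accept(TimeUnit.NANOSECONDS.toMillis(totalNanos.get())),
                periodSeconds, periodSeconds, TimeUnit.SECONDS);
        }

        // wrap a short operation, measuring it with nanosecond granularity
        public <T> T time(Callable<T> operation) throws Exception {
            long begin = System.nanoTime();
            try {
                return operation.call();
            } finally {
                totalNanos.addAndGet(System.nanoTime() - begin);
            }
        }

        public void stop() {
            publisher.shutdown();
        }
    }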

NestedRegexFilter is not building a nested pointer

Instead, it is using the #compile() method to generate a pointer, and consequently it is not capturing nested structures.

This operation may need to be updated to apply the patterns to each child element via the pointer.allAt method.

Upgrade Janino to 3.x

The upgrade requires some refactoring as the script and expression evaluators no longer have a shared base class.

Support parallelization of child partition .close() operations

By default, we don't leverage threading on the server/processing side, since threads are a highly constrained resource there.

But in local mode, when partitioning sink data into multiple files, parallelizing the close operation makes sense, as a purge may cover more than one partition.

Providing hooks to wrap the purge and close operations of the open-partition cache would benefit applications designed specifically around partitioned data, provided the container hosting the application can supply CPUs for the threads.
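
A minimal sketch of the idea in generic Java (not the BasePartitionTap internals): close the open per-partition collectors from a small thread pool and wait for them all to finish.

    import java.io.Closeable;
    import java.util.Collection;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.stream.Collectors;

    public class ParallelClose {
        // close all open partition collectors in parallel and surface any failure
        static void closeAll(Collection<? extends Closeable> openPartitions, int numThreads) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(numThreads);
            try {
                List<Callable<Void>> tasks = openPartitions.stream()
                    .map(c -> (Callable<Void>) () -> { c.close(); return null; })
                    .collect(Collectors.toList());

                for (Future<Void> result : pool.invokeAll(tasks))
                    result.get(); // rethrows any exception thrown during close
            } finally {
                pool.shutdown();
            }
        }
    }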

Possible to publish WIP to maven-central?

The GitHub artifacts require authentication for reads. This makes them hard to use seamlessly in other open source projects. Would it be possible to cross-publish to Maven Central, or to another Maven-compatible repository that allows unauthenticated reads?

Possible to keep java8 on 4.5 release?

Hey @cwensel, any chance we could keep Java 8 on the 4.5 branch at least? AWS EMR 6.x (the latest release) still defaults to Java 8 for the Hadoop 3 flavors, and it would be much appreciated to keep this support until they drop it. I would imagine that EMR is a very common use case for this type of library.

Task cleanup should not look for _temporary dirs when talking to s3

Seeing this exception sometimes when talking to S3. It could be a race condition on directory (key) creation in S3, but there should be no task cleanup when using S3.

    at cascading.tap.hadoop.io.TapOutputCollector.close(TapOutputCollector.java:184)
    at cascading.tuple.TupleEntrySchemeCollector.close(TupleEntrySchemeCollector.java:245)
    at cascading.tap.partition.BasePartitionTap$PartitionCollector.closeCollector(BasePartitionTap.java:205)
    at cascading.tap.partition.BasePartitionTap$PartitionCollector.close(BasePartitionTap.java:190)
    at cascading.flow.stream.element.SinkStage.cleanup(SinkStage.java:148)
    at cascading.flow.stream.graph.StreamGraph.cleanup(StreamGraph.java:187)
    at cascading.flow.local.planner.LocalStepRunner.call(LocalStepRunner.java:204)
    at cascading.flow.local.planner.LocalStepRunner.call(LocalStepRunner.java:53)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.io.FileNotFoundException: No such file or directory: s3a://bucket/test/_temporary/_attempt_002147483647_0000_m_000000_0/path/
    at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:1931)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:1822)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1763)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.innerListStatus(S3AFileSystem.java:1585)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:1561)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.moveTaskOutputs(Hadoop18TapUtil.java:326)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.moveTaskOutputs(Hadoop18TapUtil.java:332)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.moveTaskOutputs(Hadoop18TapUtil.java:332)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.moveTaskOutputs(Hadoop18TapUtil.java:332)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.moveTaskOutputs(Hadoop18TapUtil.java:332)
    at cascading.tap.hadoop.util.Hadoop18TapUtil.commitTask(Hadoop18TapUtil.java:174)
    at cascading.tap.hadoop.io.TapOutputCollector.close(TapOutputCollector.java:171)
    ... 11 more
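
For illustration, one possible guard along the lines of this request; the helper is hypothetical and not the current Hadoop18TapUtil behaviour:

    import org.apache.hadoop.fs.Path;

    public class TaskCleanupGuard {
        // hypothetical: skip the _temporary move/cleanup when the output
        // filesystem is an object store such as S3
        static boolean shouldRunTaskCleanup(Path outputPath) {
            String scheme = outputPath.toUri().getScheme();
            return scheme == null
                || !(scheme.equals("s3") || scheme.equals("s3a") || scheme.equals("s3n"));
        }
    }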

Mismatch between mapred/TaskCompletionEvent and mapreduce/TaskCompletionEvent

    Exception in thread "main" java.lang.VerifyError: Inconsistent stackmap frames at branch target 83
    Exception Details:
      Location:
        cascading/stats/hadoop/HadoopStepStats.captureDetail(Lcascading/stats/CascadingStats$Type;)V @83: aload_0
      Reason:
        Type '[Lorg/apache/hadoop/mapred/TaskCompletionEvent;' (current frame, locals[4]) is not assignable to '[Lorg/apache/hadoop/mapreduce/TaskCompletionEvent;' (stack map, locals[4])
      Current Frame:
        bci: @77
        flags: { }
        locals: { 'cascading/stats/hadoop/HadoopStepStats', 'cascading/stats/CascadingStats$Type', 'org/apache/hadoop/mapreduce/Job', integer, '[Lorg/apache/hadoop/mapred/TaskCompletionEvent;' }
        stack: { integer }
      Stackmap Frame:
        bci: @83
        flags: { }
        locals: { 'cascading/stats/hadoop/HadoopStepStats', 'cascading/stats/CascadingStats$Type', 'org/apache/hadoop/mapreduce/Job', integer, '[Lorg/apache/hadoop/mapreduce/TaskCompletionEvent;' }
        stack: { }

Hadoop version: 2.8.0

`TapOutputCollector` committing partial output on task failure

According to the implementation of TapOutputCollector#close and Hadoop18TapUtil#needsTaskCommit, work is always committed as long as something exists in the work path. As a result, I am seeing partial output files committed in S3 when I use a PartitionTap and my task fails after it has written some output.

Is this expected behaviour? I am seeing this in Cascading 3.1, which I understand is a few years old now, but any information here would be of great help.

Thanks
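
For illustration, a sketch of the commit decision as described above next to a guarded variant; the taskSucceeded flag and the helper names are hypothetical, not the actual Hadoop18TapUtil API:

    import java.io.IOException;

    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TaskCommitSketch {
        // behaviour as described above: commit whenever anything exists in the
        // task work path, regardless of whether the task succeeded
        static boolean needsTaskCommitAsDescribed(FileSystem fs, Path taskWorkPath) throws IOException {
            return fs.exists(taskWorkPath);
        }

        // hypothetical guarded variant: only commit when the task completed successfully
        static boolean needsTaskCommitGuarded(FileSystem fs, Path taskWorkPath, boolean taskSucceeded)
            throws IOException {
            return taskSucceeded && fs.exists(taskWorkPath);
        }
    }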

HashJoin has problematic interaction with Merge

See this graph on 3.2.1:
https://www.dropbox.com/s/iffadh9x7unrg5w/01-BalanceAssembly-init.dot.png?dl=0

You can see the full planner logs here:
https://www.dropbox.com/s/7qyc4a9pxtstwio/E552D2.tgz?dl=0

We are merging two HashJoins after some Each operations. In this particular graph, it seems possible to fix the issue by adding Checkpoints after all but one of the HashJoins (see the sketch below). This is not a great solution, since it is not obvious what a graph will look like once you combine many pipes with functions.

It would be great to have either a clear rule that we need to follow when generating the graphs, or to have this restriction removed, since we would like to use Cascading 3 in Scalding by default.

Thanks.
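
For illustration, a minimal sketch of the checkpoint workaround described above; the pipe and field names are hypothetical:

    import cascading.pipe.Checkpoint;
    import cascading.pipe.HashJoin;
    import cascading.pipe.Merge;
    import cascading.pipe.Pipe;
    import cascading.tuple.Fields;

    public class CheckpointWorkaround {
        // srcA/srcB carry fields ("id", "val"); tinyA/tinyB carry fields ("tid", "tval")
        static Pipe mergeTwoHashJoins(Pipe srcA, Pipe srcB, Pipe tinyA, Pipe tinyB) {
            Pipe joinedA = new HashJoin(srcA, new Fields("id"), tinyA, new Fields("tid"),
                new Fields("id", "val", "tid", "tval"));
            Pipe joinedB = new HashJoin(srcB, new Fields("id"), tinyB, new Fields("tid"),
                new Fields("id", "val", "tid", "tval"));

            // the workaround described above: checkpoint all but one of the
            // HashJoin branches before the Merge
            joinedA = new Checkpoint(joinedA);

            return new Merge("merged", joinedA, joinedB);
        }
    }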

potential regression on scalding joinWithTiny on 4.5-wip

I've been testing scalding with newer cascading as a demo on a branch here: daniel-sudz/scalding#1.

I currently have the following bad output:

    [info] - should merge and joinWithTiny shouldn't duplicate data *** FAILED ***
    [info]   Set((1,3), (2,3), (3,3), (4,2)) was not equal to Set((1,2), (2,2), (3,2), (4,1)) (PlatformTest.scala:466)

It looks like there is some duplication going on, considering 3 > 2 and 2 > 1. I saw that there was some previous discussion around this when the cascading3 Scalding branch was being developed, before it stalled: twitter/scalding#1592. The resolution there seemed to be a higher Hadoop version, so it is not really applicable here.

Not sure where to begin debugging this but would love some pointers.
