
Introduction

MOVED: docker-flink

This repo has moved to apache/flink-docker and will receive no further updates.


Docker packaging for Apache Flink

Use add-version.sh to rebuild the Dockerfiles and all variants for a particular Flink release. Before running this, you must first delete the existing release directory.

usage: ./add-version.sh -r flink-release -f flink-version

Example

$ rm -r 1.2
$ ./add-version.sh -r 1.2 -f 1.2.1

Stackbrew Manifest

generate-stackbrew-library.sh is used to generate the library file required for official Docker Hub images.

When this repo is updated, the output of this script should be used to replace the contents of library/flink in the Docker official-images repo via a PR.

Note: running this script requires the bashbrew binary and a compatible version of Bash. The Docker image plucas/docker-flink-build contains these dependencies and can be used to run this script.

Example:

docker run --rm \
    --volume /path/to/docker-flink:/build \
    plucas/docker-flink-build \
    /build/generate-stackbrew-library.sh \
> /path/to/official-images/library/flink

License

Licensed under the Apache License, Version 2.0: https://www.apache.org/licenses/LICENSE-2.0

Apache Flink, Flink®, Apache®, the squirrel logo, and the Apache feather logo are either registered trademarks or trademarks of The Apache Software Foundation.

People

Contributors

patricklucas


Issues

JobManager liveness probe

livenessProbe:
  httpGet:
    path: /overview
    port: 8081
  initialDelaySeconds: 30
  periodSeconds: 10

A livenessProbe is important, especially in HA mode: we want to make sure the JobManager was elected as leader. I have sometimes observed it getting stuck if the JobManager starts before the ZooKeeper cluster is up and running; it hangs waiting for ZooKeeper to elect a leader and never completes.
This livenessProbe has fixed that issue.

If the JobManager cannot see a leader, it returns an error, which is caught by the livenessProbe, and Kubernetes will eventually restart the container.
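
For context, here is a minimal sketch of how the probe above could be embedded in the JobManager container of a Kubernetes Deployment. The names, labels, and image tag are illustrative assumptions, not values taken from this chart:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager          # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
        - name: jobmanager
          image: flink:1.4.0      # assumed image tag
          ports:
            - containerPort: 8081          # web UI port, also used by the probe
          livenessProbe:
            httpGet:
              path: /overview              # errors until a JobManager leader is elected
              port: 8081
            initialDelaySeconds: 30
            periodSeconds: 10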

What is the status of this chart?

Hi there,

Currently the Flink documentation points to this chart (see here).
However, it seems that this chart is no longer maintained. Is this still the official chart, or is there another one in active development?

Thanks!

Flink on Amazon ECS crashes in "WikiEdit" example

Hi!
I'm trying to create a Flink cluster on Amazon ECS with the jobmanager and taskmanager in different task definitions. If I submit other examples available on Flink's website, they seem to work. However, the "WikiEdit" example, which sends its results to my Kafka cluster, crashes after 90~110 seconds. If I deploy the same setup locally (via Docker Compose), it works like a charm. Any idea? :(

Logs:

2018-06-20 13:39:00,001 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to fail task externally Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed (1/1) (5d360677897b1276468e0a4727c6ba1f).
2018-06-20 13:39:00,002 INFO org.apache.flink.runtime.taskmanager.Task - Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed (1/1) (5d360677897b1276468e0a4727c6ba1f) switched from RUNNING to FAILED.
TimerException{org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator}
at org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$TriggerTask.run(SystemProcessingTimeService.java:284)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at org.apache.flink.streaming.api.functions.windowing.PassThroughWindowFunction.apply(PassThroughWindowFunction.java:36)
at org.apache.flink.streaming.runtime.operators.windowing.functions.InternalSingleValueWindowFunction.process(InternalSingleValueWindowFunction.java:46)
at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.emitWindowContents(WindowOperator.java:550)
at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onProcessingTime(WindowOperator.java:505)
at org.apache.flink.streaming.api.operators.HeapInternalTimerService.onProcessingTime(HeapInternalTimerService.java:266)
at org.apache.flink.streaming.runtime.tasks.SystemProcessingTimeService$TriggerTask.run(SystemProcessingTimeService.java:281)
... 7 more
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:577)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:535)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:515)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:560)
... 18 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 67 ms.
2018-06-20 13:39:00,014 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed (1/1) (5d360677897b1276468e0a4727c6ba1f).
2018-06-20 13:39:00,033 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - Error during disposal of stream operator.
org.apache.kafka.common.KafkaException: java.lang.InterruptedException
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:424)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.close(FlinkKafkaProducerBase.java:319)
at org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
at org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:473)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:374)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1252)
at java.lang.Thread.join(Thread.java:1326)
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:422)
... 7 more
2018-06-20 13:39:00,037 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed (1/1) (5d360677897b1276468e0a4727c6ba1f).
2018-06-20 13:39:00,082 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed (1/1) (5d360677897b1276468e0a4727c6ba1f) [FAILED]
2018-06-20 13:39:00,083 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state FAILED to JobManager for task Window(TumblingProcessingTimeWindows(30000), ProcessingTimeTrigger, FoldFunction$2, PassThroughWindowFunction) -> Map -> Sink: Unnamed 5d360677897b1276468e0a4727c6ba1f.
2018-06-20 13:39:00,105 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a).
2018-06-20 13:39:00,105 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a) switched from RUNNING to CANCELING.
2018-06-20 13:39:00,106 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a).
2018-06-20 13:39:00,110 INFO org.apache.flink.runtime.taskmanager.Task - Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a) switched from CANCELING to CANCELED.
2018-06-20 13:39:00,110 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a).
2018-06-20 13:39:00,114 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Discarding the results produced by task execution 5d360677897b1276468e0a4727c6ba1f.
2018-06-20 13:39:00,117 INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are closed for task Source: Custom Source (1/1) (b7ab2070ada6626b3a1ab3f749a58d2a) [CANCELED]
2018-06-20 13:39:00,117 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and sending final execution state CANCELED to JobManager for task Source: Custom Source b7ab2070ada6626b3a1ab3f749a58d2a.
2018-06-20 13:39:00,142 INFO org.apache.flink.runtime.taskexecutor.slot.TaskSlotTable - Free slot TaskSlot(index:0, state:ACTIVE, resource profile: ResourceProfile{cpuCores=1.0, heapMemoryInMB=42, directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0}, allocationId: a05f140ecc711d103e6f5059ff3a2d2a, jobId: a6be4c4207ddc3bbc4172f60cbc7512a).
2018-06-20 13:39:00,143 INFO org.apache.flink.runtime.taskexecutor.JobLeaderService - Remove job a6be4c4207ddc3bbc4172f60cbc7512a from job leader monitoring.
2018-06-20 13:39:00,143 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Close JobManager connection for job a6be4c4207ddc3bbc4172f60cbc7512a.
2018-06-20 13:39:00,148 INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - Close JobManager connection for job a6be4c4207ddc3bbc4172f60cbc7512a.
2018-06-20 13:39:00,148 INFO org.apache.flink.runtime.taskexecutor.JobLeaderService - Cannot reconnect to job a6be4c4207ddc3bbc4172f60cbc7512a because it is not registered.

HA mode not activated: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.SecurityUtil

I've set up a ZooKeeper cluster:

helm --namespace vince2 install --name vince2-zookeeper incubator/zookeeper

Created a values.yaml:

cat > values.yaml <<EOF
flink:
  num_taskmanagers: 3
  highavailability:
    enabled: true
    zookeeper_quorum: vince2-zookeeper-zookeeper
    state_s3_bucket: s3://my-bucket/vince2/ha
EOF

(I don't need AWS keys; I have an instance profile allowing any pod to read/write to that S3 bucket, which I have tested with another component.)

Deploy Flink:

helm --namespace vince2 install --name=vince2-flink-ha --values=values.yaml flink-1.4.0.tgz

But the JobManager log says it is not starting with high availability:

2018-04-16 04:59:11,606 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager without high-availability
2018-04-16 04:59:11,609 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager on vince2-flink-ha-flink-jobmanager:6123 with execution mode CLUSTER

It seems the HA settings didn't make it to the configmap:

flink-conf.yaml: |
  blob.server.port: 6124
  jobmanager.rpc.address: vince2-flink-ha-flink-jobmanager
  jobmanager.rpc.port: 6123
  jobmanager.heap.mb: 1024
  taskmanager.heap.mb: 1024
  taskmanager.numberOfTaskSlots: 1
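
For comparison, a sketch of the entries one would expect in flink-conf.yaml when high availability is active, using Flink's standard HA keys and echoing the values.yaml above. The ZooKeeper port and exact key mapping are assumptions, not output from this chart:

flink-conf.yaml: |
  blob.server.port: 6124
  jobmanager.rpc.address: vince2-flink-ha-flink-jobmanager
  jobmanager.rpc.port: 6123
  jobmanager.heap.mb: 1024
  taskmanager.heap.mb: 1024
  taskmanager.numberOfTaskSlots: 1
  high-availability: zookeeper                                          # enables ZooKeeper-based HA
  high-availability.zookeeper.quorum: vince2-zookeeper-zookeeper:2181   # assumed default ZooKeeper port
  high-availability.storageDir: s3://my-bucket/vince2/ha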

Flink and Calico on Kubernetes connection reset / PUT operation failed

I'm not sure whether this should be posted here. Please let me know if it's not appropriate.

We run Flink on Kubernetes 1.8 in AWS. It has been fine for months, and we can also make it work with this Helm chart. I recently set up a new k8s cluster. Everything is the same EXCEPT we enabled Calico (instead of using only Flannel).

Calico gives us networking between containers.

Since enabling Calico, the Flink client receives this error when trying to send a jar file to the job manager:

	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:478)
	at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:105)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
	at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:73)
	at au.com.nbnco.ourapp.connectivitytest.Application.executeJob(Application.java:26)
	at au.com.nbnco.ourapp.connectivitytest.Application.main(Application.java:14)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:381)
	at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:838)
	at org.apache.flink.client.CliFrontend.run(CliFrontend.java:259)
	at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1086)
	at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1133)
	at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1130)
	at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
	at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1130)
Caused by: org.apache.flink.runtime.client.JobSubmissionException: Could not upload the jar files to the job manager.
	at org.apache.flink.runtime.client.JobSubmissionClientActor$1.call(JobSubmissionClientActor.java:154)
	at akka.dispatch.Futures$$anonfun$future$1.apply(Future.scala:95)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: Could not retrieve the JobManager's blob port.
	at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:746)
	at org.apache.flink.runtime.jobgraph.JobGraph.uploadUserJars(JobGraph.java:584)
	at org.apache.flink.runtime.client.JobSubmissionClientActor$1.call(JobSubmissionClientActor.java:148)
	... 9 more
Caused by: java.io.IOException: PUT operation failed: Connection reset
	at org.apache.flink.runtime.blob.BlobClient.putInputStream(BlobClient.java:512)
	at org.apache.flink.runtime.blob.BlobClient.put(BlobClient.java:374)
	at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:772)
	at org.apache.flink.runtime.blob.BlobClient.uploadJarFiles(BlobClient.java:741)
	... 11 more
Caused by: java.net.SocketException: Connection reset
	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
	at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
	at org.apache.flink.runtime.blob.BlobUtils.writeLength(BlobUtils.java:324)
	at org.apache.flink.runtime.blob.BlobClient.putInputStream(BlobClient.java:498)

and the JobManager says:

java.lang.IllegalArgumentException: Invalid BLOB addressing for permanent BLOBs
	at org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:139)
	at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:337)
	at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:114)
2018-03-27 06:28:16,069 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Submitting job 11433fc332c7d76100fd08e6d1b623b4 (flink-job-connectivity-test).
2018-03-27 06:28:16,085 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Using restart strategy NoRestartStrategy for 11433fc332c7d76100fd08e6d1b623b4.
2018-03-27 06:28:16,096 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph        - Job recovers via failover strategy: full graph restart
2018-03-27 06:28:16,105 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Running initialization on master for job flink-job-connectivity-test (11433fc332c7d76100fd08e6d1b623b4).
2018-03-27 06:28:16,105 INFO  org.apache.flink.runtime.jobmanager.JobManager                - Successfully ran initialization on master in 0 ms.
2018-03-27 06:28:16,117 ERROR org.apache.flink.runtime.jobmanager.JobManager                - Failed to submit job 11433fc332c7d76100fd08e6d1b623b4 (flink-job-connectivity-test)
java.lang.NullPointerException
	at org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:58)
	at org.apache.flink.runtime.checkpoint.CheckpointStatsTracker.<init>(CheckpointStatsTracker.java:121)
	at org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:228)
	at org.apache.flink.runtime.jobmanager.JobManager.org$apache$flink$runtime$jobmanager$JobManager$$submitJob(JobManager.scala:1277)
	at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:447)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:38)
	at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
	at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
	at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
	at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
	at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
	at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
	at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:122)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
	at akka.actor.ActorCell.invoke(ActorCell.scala:495)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
	at akka.dispatch.Mailbox.run(Mailbox.scala:224)
	at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

It looks like the file cannot be transferred from the client to the job manager. I believe the "Invalid BLOB addressing" error occurs because the job manager did not receive any file.

I don't understand why we get "Connection reset". I can see in tcpdump that the job manager and client use a number of ports to communicate and transfer files.

Everything is the same. Works on one cluster. Does not work on another. Ports are configured the same. Every artefact is the same.

We don't have any NetworkPolicy, but would enabling Calico have some effect on networking?
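
For what it's worth, if a NetworkPolicy were involved, a minimal sketch that explicitly allows the Flink ports between pods might look like the following. The name, namespace, and labels are assumptions for illustration, not taken from the chart:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-flink-ports       # assumed name
  namespace: vince2
spec:
  podSelector:
    matchLabels:
      app: flink                # assumed pod label
  ingress:
    - ports:
        - port: 6123            # jobmanager.rpc.port
        - port: 6124            # blob.server.port (jar upload)
        - port: 8081            # web UI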

Readme not up to date with passing additional options such as HA

The readme references the old way of passing the options to the Flink template. A recent commit has broken that.
Example of what to use:

flink:
  jobmanager_heap_mb: 1024
  taskmanager_heap_mb: 8192
  num_taskmanagers: 2
  num_slots_per_taskmanager: 4
  config: |
    taskmanager.memory.preallocate: false
    parallelism.default: 4
    high-availability: zookeeper
    high-availability.zookeeper.quorum: ....
    high-availability.zookeeper.path.root: /flink
    high-availability.zookeeper.path.cluster-id: /default_ns
    high-availability.storageDir: s3://...
    high-availability.zookeeper.storageDir: s3://....
    high-availability.jobmanager.port: 6123
    high-availability.zookeeper.client.acl: open

Passing the additional options this way works for me.

I'm happy to propose a PR if this project is still active.
