
Java Buildpack Memory Calculator

The Java Buildpack Memory Calculator computes a holistic JVM memory configuration with the goal of ensuring that applications perform well while staying within a container's memory limit, so that the container is not recycled for exceeding it.

In order to perform this calculation, the Memory Calculator requires the following input:

  • --total-memory: total memory available to the application, typically expressed with size classification (B, K, M, G, T)
  • --loaded-class-count: the number of classes that will be loaded when the application is running
  • --thread-count: the number of user threads
  • --jvm-options: JVM Options, typically JAVA_OPTS
  • --head-room: percentage of total memory available which will be left unallocated to cover JVM overhead

The Memory Calculator prints the calculated JVM configuration flags (excluding any that the user has specified in --jvm-options). If a valid configuration cannot be calculated (e.g. more memory must be allocated than is available), an error is printed and a non-zero exit code is returned. In order to override a calculated value, users should pass any of the standard JVM configuration flags into --jvm-options. The calculation will take these as fixed values and adjust the non-fixed values accordingly.

Install

$ go get -v github.com/cloudfoundry/java-buildpack-memory-calculator

Algorithm

The following algorithm is used to generate the holistic JVM memory configuration:

  1. Headroom amount is calculated as total memory * (head room / 100).

  2. If -XX:MaxDirectMemorySize is configured it is used for the amount of direct memory. If not configured, 10M (in the absence of any reasonable heuristic) is used.

  3. If -XX:MaxMetaspaceSize is configured it is used for the amount of metaspace. If not configured, then the value is calculated as (5800 bytes * loaded class count) + 14000000 bytes.

  4. If -XX:ReservedCodeCacheSize is configured it is used for the amount of reserved code cache. If not configured, 240M (the JVM default) is used.

  5. If -Xss is configured it is used for the size of each thread stack. If not configured, 1M (the JVM default) is used.

  6. If -Xmx is configured it is used for the size of the heap. If not configured, then the value is calculated as

    total memory - (headroom amount + direct memory + metaspace + reserved code cache + (thread stack * thread count))
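
The six steps above can be expressed as a minimal Go sketch. This is illustrative only, not the calculator's actual source; the constants mirror the documented defaults, and the input values in main are arbitrary examples.

```go
package main

import "fmt"

const (
	kib int64 = 1024
	mib       = 1024 * kib
	gib       = 1024 * mib
)

// calculatedHeap applies steps 1-6 with the documented defaults:
// 10M direct memory, 240M reserved code cache, 1M thread stacks, and
// the metaspace formula (5800 bytes per loaded class + 14000000 bytes).
func calculatedHeap(totalMemory int64, headRoomPercent float64, loadedClassCount, threadCount int64) int64 {
	headroom := int64(float64(totalMemory) * headRoomPercent / 100) // step 1
	directMemory := 10 * mib                                        // step 2
	metaspace := 5800*loadedClassCount + 14000000                   // step 3
	codeCache := 240 * mib                                          // step 4
	stackPerThread := 1 * mib                                       // step 5
	// step 6: the heap gets whatever remains of the total
	return totalMemory - (headroom + directMemory + metaspace + codeCache + stackPerThread*threadCount)
}

func main() {
	// 1G container, 0% headroom, 20000 loaded classes, 250 threads (illustrative values).
	fmt.Printf("-Xmx%dK\n", calculatedHeap(1*gib, 0, 20000, 250)/kib) // prints -Xmx409622K
}
```

Note how the non-heap pools are fixed first and the heap absorbs whatever is left, which is why shrinking the container shrinks only the heap.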
    

Broadly, this means that for a constant application (same number of classes), the non-heap overhead is a fixed value. Any changes to the total memory will be directly reflected in the size of the heap. Adjustments to the non-heap memory configuration (e.g. stack size, reserved code cache) can result in larger heap sizes, but can also have negative runtime side effects that must be taken into account.

For example, with a 1G memory limit you have a heap size of 1G - (0 headroom + 10M direct memory + X metaspace + 240M code cache + 250 threads * 1M stack), which means heap space = 524M - X metaspace. Metaspace is often around 100M for a typical Spring Boot app, so in that situation we're left with around 424M of heap space. If you lower the memory limit to 768M, you end up with heap space = 268M - X metaspace, or 168M for a typical Spring Boot app (100M metaspace). As you can see, when the memory limit goes below 1G, the formula used by the memory calculator prioritizes the non-heap space, and heap space suffers.

Every application is different, but for best results, it is recommended that users running with a memory limit below 1G apply some manual adjustments to the memory settings. For example, you can lower the thread stack size, the number of threads, or the reserved code cache size. This leaves more room for the heap. Just be aware that each of these tunings has a trade-off for your application in terms of scalability (threads) or performance (code cache), which is why the memory calculator prioritizes these settings over the heap. You need to test and evaluate the trade-offs for a given application and decide what works best.

Compressed class space size

According to the HotSpot GC Tuning Guide:

The MaxMetaspaceSize applies to the sum of the committed compressed class space and the space for the other class metadata.

Therefore the memory calculator does not set the compressed class space size (-XX:CompressedClassSpaceSize) since the memory for the compressed class space is bounded by the maximum metaspace size (-XX:MaxMetaspaceSize).

License

The Java Buildpack Memory Calculator is Open Source software released under the Apache 2.0 license.

java-buildpack-memory-calculator's People

Contributors

clnative, dmikusa, ekcasey, glyn, ikabdyushev, johannesebke, nebhale, pivotal-david-osullivan, timgerlach, vpavic, youngm


java-buildpack-memory-calculator's Issues

Calculated JVM memory configuration is probably wrong

I have a Spring Boot project and a Docker image built by mvn spring-boot:build-image. When I run the image with docker run (with just a port parameter), I can see a warning:

Setting Active Processor Count to 8
WARNING: Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx192904K -XX:MaxMetaspaceSize=343671K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 1G, Thread Count: 250, Loaded Class Count: 58262, Headroom: 0%)

When I run docker stats I get this:

CONTAINER ID        NAME                 CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
2c14517cf3e1        laughing_albattani   0.17%               528.2MiB / 5.966GiB   8.65%               1.26kB / 0B         0B / 0B             48

So why does it configure the JVM for just 1GB? And -Xmx for only about 200MB? I would expect much higher values.

Heuristics to compute Xms (min heap size) value

The min heap size (aka "-Xms") value is an important value on startup: keeping the default value (128 MB) implies slow startup times when all classes loaded at startup cannot fit in this pre-allocated memory space.

It can even be the main cause of OutOfMemory errors if the growth rate is higher than the JVM/container ability to allocate more memory on-demand.

Could you try to create some heuristics to compute the min heap size according to the number of classes to load?

Document all command line flags in the README file

Without access to a built memory calculator to print its usage instructions, the source code has to be read to discover the possible flags. Even better, paste the command-line usage instructions into the README file.

Handling unrecognized command line options

The memory calculator appears to ignore unrecognized command line options. For example:

$ java-buildpack-memory-calculator-linux -totMemory 128m -memorySizes 'metaspace:64m' memoryWeights 'heap:75,metaspace:10,stack:5,native:10'
The allocated Java memory sizes total 0 which is less than 0.8 of the available memory, so configured Java memory sizes may be too small or available memory may be too large
$ echo $?
0

This ignores the memoryWeights argument (note the missing leading dash). It would be better to fail with a suitable message.

Allow to specify ratio of allocation of available memory between heap and direct memory

Applications that heavily use NIO, like some Netty-based ones, sometimes benefit from more direct memory as the overall memory available to the JVM increases. This is because direct memory does not have, in itself, garbage collection, but rather it relies on APIs like Cleaner or other phantom reference-based ones to free direct memory as a consequence of evicting objects from heap (like ByteBuffers).

We actually have one such use-case in the Instana Agent, which is a Netty-based application that uses significantly more direct memory than most Java apps. When the Instana Agent runs in a container, we usually want to allocate roughly 1MB to direct memory for each 3MB to the heap, although heavy usage of some functionality skews that ratio one way or the other.

We have forked the memory calculator for tweaking it to our needs, but we'd be happy to contribute back if this feature is interesting to this project.

Stack memory calculation

We run our app on Pivotal's CF, which runs memory calculator 3.9.0_RELEASE.
Our JVM memory was 300MB less than our container limit, so I ran the memory calculator manually and realised that the -stackThreads argument affects the -Xmx value.
Based on the docs, it should decrease the heap by the calculated stack memory and adjust the stack size, but the stack value remains -Xss1M no matter what the -stackThreads param is:

$ ./java-buildpack-memory-calculator-3.9.0_RELEASE -totMemory=1280M -loadedClasses=27000 -stackThreads=1 -poolType=metaspace -vmOptions=""
-XX:ReservedCodeCacheSize=240M -XX:CompressedClassSpaceSize=26269K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=166601K -Xss1M -Xmx860824K
$ ./java-buildpack-memory-calculator-3.9.0_RELEASE -totMemory=1280M -loadedClasses=27000 -stackThreads=300 -poolType=metaspace -vmOptions=""
-XX:ReservedCodeCacheSize=240M -XX:CompressedClassSpaceSize=26269K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=166601K -Xss1M -Xmx554648K
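As a sanity check on the two outputs above, the difference between the -Xmx values is exactly the stack memory reserved for the 299 extra threads (a small illustrative check, not code from the calculator):

```go
package main

import "fmt"

func main() {
	// The -Xmx values (in KiB) reported by the two calculator runs above.
	xmxOneThread := 860824
	xmx300Threads := 554648
	// Each additional thread reserves 1M (1024 KiB) of stack,
	// which the calculator subtracts from the heap.
	extraStackKiB := (300 - 1) * 1024
	fmt.Println(xmxOneThread-xmx300Threads == extraStackKiB) // prints true
}
```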

I see two problems:

  1. In memory/allocator.go, the calculateMaxHeapSize method never sets the stack size, it just decreases the heap, so we lose memory.
  2. 1MB of stack per thread feels like way too much; it matches the JVM default of 1MB, but that default isn't tuned for a single-threaded application.

Rationale behind the reserved code cache increase with Java 8

Hello,

The calculator's README file states that:

For Java 8 and later, the memory calculator sets the maximum metaspace size (-XX:MaxMetaspaceSize) based on the number of classes that will be loaded and sets the reserved code cache size (-XX:ReservedCodeCacheSize) to 240 Mb.

For Java 7, it sets the maximum permanent generation size (-XX:MaxPermSize) based on the number of classes that will be loaded and sets the reserved code cache size (-XX:ReservedCodeCacheSize) to 48 Mb.

And also the linked google document states:

Memory calculator v3 estimates:

  • maximum metaspace size
  • compressed class space size

based on the number of class files in the application. It sets the following to constants:

  • reserved code cache size (48 Mb on Java 7; 240 Mb on Java 8)
  • maximum direct memory size (10 Mb)

and then sets the heap size to the remainder of total memory.

But the rationale behind the increase of reserved code cache from 48 Mb to 240 Mb is not explained anywhere. Would it be possible to explain it?

Thanks in advance!

Log messages should end with newlines

Log lines do not have newlines and this results in one of the following:

  • a log message does not appear in the logs if no message with a newline follows it
  • a log message becomes a prefix for the next message

Add Support for the IBM JRE Memory calculation

Hi,

As the new JRE component (IBM JRE) is added to the Java buildpack, we have written a doc which explains the VM options corresponding to the IBM JRE that are not present in the current memory calculator. We would like to update the memory calculator to support the IBM JRE by emitting these options.

You can find the document here

fyi - @ashu-mehra @dinogun

Adjust ReservedCodeCacheSize

I have a very small application, and the ReservedCodeCacheSize of 240M is not appropriate and is stealing the heap that I need. You can configure some things with environment variables, but I didn't see any way to tweak this parameter or change the Java parameters directly.

Possibility to override memory instead of detection

It would be good to be able to set a max total to cater for limited platforms like Azure App Service, where you cannot actually set a memory limit for Linux containers. So instead of detecting the underlying platform max, I'd like to set some var MAX_MEM=1g and let the calculator do its calculation based on this max.

There is https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-windows#customize-container-memory - but it only works for windows containers.

Memory Calculator should prioritize last instance of command switch instead of first

Tested this against the excellent JBP 4.3 release.

I would like to provide default memory configuration in my java_opts.yml and allow users to override that using the JAVA_OPTS app environment variable.

It appears Java takes the last of a duplicated parameter but the memory calculator appears to take the first.

java -Xss1M -Xss256k -XX:+PrintFlagsFinal | grep "intx ThreadStackSize"
     intx ThreadStackSize                          := 256                                 {pd product}

It appears in the example below that the memory calculator prefers the first value. The JBP appears to correctly place the environment JAVA_OPTS at the end of the start command. However, the memory calculator will think the application can use more heap than it really can.

cf set-env testapp JBP_CONFIG_JAVA_OPTS '{ java_opts: "-Xss256k" }'
cf set-env testapp JAVA_OPTS "-Xss1M"
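The JVM's last-one-wins handling of duplicated flags can be sketched as a simple scan that keeps the final occurrence. This is a hypothetical illustration of the desired behaviour, not the memory calculator's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// lastStackSize returns the value of the final -Xss flag in opts,
// mirroring the JVM's last-one-wins handling of duplicated options.
// Hypothetical illustration only.
func lastStackSize(opts []string) string {
	size := ""
	for _, opt := range opts {
		if strings.HasPrefix(opt, "-Xss") {
			size = strings.TrimPrefix(opt, "-Xss")
		}
	}
	return size
}

func main() {
	// Matches the java -Xss1M -Xss256k example above: the JVM uses 256k.
	fmt.Println(lastStackSize([]string{"-Xss1M", "-Xss256k"})) // prints 256k
}
```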

Create a binary release for using independently of the platform

Thanks for this solution!

Maybe you can use a Github release and/or goreleaser.

The use case is: starting a container in Kubernetes / docker-compose it will be nice to have a bash script running java -cp ... and appending the options generated by this app, just by curling for the binary and using it.

Impossible to compute configuration for Boot application and 256M total memory

A minimal Boot application is using 8000 classes. My typical non-trivial Spring Boot application (https://github.com/mixitconf/mixit) is loading around 10000 classes. Both run fine locally with -Xmx128M.

I would expect java-buildpack-memory-calculator to be able to compute a configuration with 256M of total memory, but it is currently not possible.

./java-buildpack-memory-calculator --loaded-class-count 8000 --thread-count 4 --total-memory 256m

allocated memory is greater than 256M available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=70312K, -XX:ReservedCodeCacheSize=240M, -Xss1M x 4 threads

The -XX:ReservedCodeCacheSize=240M looks suspicious to me.

Getting java.lang.OutOfMemoryError: Java heap space when using Spring kafka to produce a message

I'm trying to deploy a Spring Boot app on Pivotal Cloud Foundry that uses Spring Kafka to send a message to Kafka, and I'm getting the error below with 1GB memory in manifest.yml. My Spring Boot app has just one REST controller with only one endpoint to send a message. The app starts without any issue if I change the memory to 2GB in my manifest.yml, and it takes almost 1.3 GB - 1.8 GB to run. I'm wondering why this simple app is taking so much. Is it because of Kafka?
In our app, we are using org.apache.kafka.clients.admin.KafkaAdminClient

We are using Spring 2.x, Java 8, and buildpack 3.9 with this app.

[APP/PROC/WEB/0] OUT # java.lang.OutOfMemoryError: Java heap space
[APP/PROC/WEB/0] OUT # -XX:OnOutOfMemoryError="/home/vcap/app/.java-buildpack/open_jdk_jre/bin/killjava.sh"

Any help to understand this issue would be greatly appreciated.

Please, provide Windows compiled application

It's not always possible to install everything you want on a computer. Can you provide the binaries for Windows and macOS directly from the GitHub page? For example: like the Linux version, in the "release" section.

Or maybe make a full web version of this tool?

GC parallelism tweaks

I recently read this article which points out that the garbage collector might need to be tuned (-XX:ParallelGCThreads, -XX:ConcGCThreads) to the number of CPUs in containers. Any thoughts about including this in the memory calculator?

-XX:MaxDirectMemorySize and Spring Cloud Stream with Kafka

We've been struggling with a lot of "Direct Memory Buffer" errors in some of our kafka-based spring cloud stream applications, running on PCF.

We noticed it mostly occurs while a single consumer is assigned multiple topic partitions.

2018-02-05T18:26:30.178-06:00 [APP/PROC/WEB/4] [OUT] java.lang.OutOfMemoryError: Direct buffer memory
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.Selector.poll(Selector.java:291) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979) ~[kafka-clients-0.10.1.1.jar!/:na]

I believe this issue is related to the fact that the buildpack assigns only 10 megabytes to the direct memory buffer, hardcoded.

With kafka using nio for all its processing, this can be the reason for our error.

I believe the buildpack should calculate the value for -XX:MaxDirectMemorySize based on additional parameters, and if spring-cloud-stream-kafka-binder is on the classpath, it should increase the value.

Take garbage collector into account

Hi,

I'm incredibly grateful for this project, but I encountered an unexpected problem yesterday. I was deploying a container with settings following this project, but the container kept going OOM. When observing the JVM process, there was a difference of 300MB to 400MB between the rss value reported by ps and the sum of the JVM memory areas, on a 3GB heap with Java 11. All limits were enforced.

From what I can tell, this was memory used by the garbage collector G1. This difference was significantly smaller for parallel and serial GCs, with a minimum of 50MB to 100MB for parallel GC. It seems that the GC does not limit itself, or simply cannot work with less than these values.

I understand that it is very difficult, maybe impossible, to predict these values. But maybe it's possible to at least gather data on this for documentation purposes. Right now, even using all of the heuristics in this project, I still have to guess for the right amount of headroom in a trial-and-error process.

Taking into account frameworks like Dynatrace OneAgent

Does the memory calculator take into account the overhead memory usage of frameworks such as Dynatrace OneAgent, integrated with the Java buildpack? If yes, what memory (a JVM memory region or headroom) is used by such frameworks, so as to be able to tweak that memory's settings?

Stack size flag (-Xss) not set explicitly by default

It looks like the memory calculator considers a default stack size of 1M for the calculation, if -Xss is not set in JAVA_OPTS manually by developers. However, if this default value is used, no -Xss flag is ever given to the JVM which leads to the default stack size being used by the JVM.

It's probably a minor issue since most JVMs use less than or exactly 1M as stack size by default (e.g. OpenJDK JRE), but it's a potential inconsistency between the memory calculation and the actual limit taken into account during runtime. Also, when the JVM uses a default stack size less than 1M, e.g. when using a different JRE, the memory calculator considers too much memory for stacks (stack size * stack threads).

Therefore, and for the sake of transparency, I think it would make sense to add -Xss1024k explicitly in case the memory calculator uses its default value.

What do you think?

Allow to specify ratio of allocation of available memory to young generation in heap

In applications that should optimise for minor garbage collection (which is most cloud-based ones, in my experience), it is useful to set -Xmn to fix how much heap memory can be allocated to the young generation. This parameter usually makes sense as a fraction of the total heap memory, so that allocating more memory to the surrounding container lets the memory allocated to the young generation expand linearly with the total memory available to the heap.

We have implemented this feature in the forked memory calculator used by the Instana Agent, but we'd be happy to contribute back if this feature is interesting to this project.

Misconfiguration warning misbehaviour

The memory calculator sometimes issues a warning message (warning, presumably, because the exit status code is 0) but fails to print the memory settings. For example:

$ java-buildpack-memory-calculator-linux -totMemory 128m -memorySizes 'metaspace:64m' memoryWeights 'heap:75,metaspace:10,stack:5,native:10'
The allocated Java memory sizes total 0 which is less than 0.8 of the available memory, so configured Java memory sizes may be too small or available memory may be too large
$ echo $?
0

This should either be a warning, with memory settings printed, or an error with a non-zero exit status. The problem with making it an error is that the values passed in may be intentional.

Getting way too low Xmx numbers with a regular Spring Boot app

I've put the sample app here: https://github.com/berndgoetz/cf-jbp-memcalc-test

The manifest defines 768M of memory for the container. My expectation is that around 75% or more of that memory is given to the JRE as max heap space, hence around 576M.

But what you can see at startup is

 2019-04-19 22:39:03 [APP/PROC/WEB/0] OUT JVM Memory Configuration: -Xmx81033K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=193398K 

The calculator gives the Java process -Xmx81033K which is about 79M only, about 10%! What happens with the rest of the memory?

The Java Buildpack version is 4.17.2. Memory Calculator version is 3.13.0

From the startup logs:

2019-04-19 22:35:39 [STG/0] OUT Downloading java_buildpack...
2019-04-19 22:36:03 [STG/0] OUT Downloaded java_buildpack (644.2M)
2019-04-19 22:36:03 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb creating container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:36:04 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb successfully created container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:36:04 [STG/0] OUT Downloading build artifacts cache...
2019-04-19 22:36:04 [STG/0] OUT Downloading app package...
2019-04-19 22:36:05 [STG/0] OUT Downloaded build artifacts cache (132B)
2019-04-19 22:36:08 [STG/0] OUT Downloaded app package (72.8M)
2019-04-19 22:36:10 [STG/0] OUT -----> Java Buildpack v4.17.2 (offline) | https://github.com/cloudfoundry/java-buildpack.git#6ce39cf
2019-04-19 22:36:10 [STG/0] OUT -----> Downloading Jvmkill Agent 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/jvmkill/bionic/x86_64/jvmkill-1.16.0_RELEASE.so (found in cache)
2019-04-19 22:36:10 [STG/0] OUT -----> Downloading Open Jdk JRE 11.0.2_09 from https://java-buildpack.cloudfoundry.org/openjdk/bionic/x86_64/openjdk-11.0.2_09.tar.gz (found in cache)
2019-04-19 22:36:11 [STG/0] OUT Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
2019-04-19 22:36:11 [STG/0] OUT JVM DNS caching disabled in lieu of BOSH DNS caching
2019-04-19 22:36:11 [STG/0] OUT -----> Downloading Open JDK Like Memory Calculator 3.13.0_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/bionic/x86_64/memory-calculator-3.13.0_RELEASE.tar.gz (found in cache)
2019-04-19 22:36:12 [STG/0] OUT Loaded Classes: 30953, Threads: 250
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Client Certificate Mapper 1.8.0_RELEASE from https://java-buildpack.cloudfoundry.org/client-certificate-mapper/client-certificate-mapper-1.8.0_RELEASE.jar (found in cache)
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Container Security Provider 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.16.0_RELEASE.jar (found in cache)
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Spring Auto Reconfiguration 2.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-2.5.0_RELEASE.jar (found in cache)
2019-04-19 22:36:27 [STG/0] OUT Exit status 0
2019-04-19 22:36:27 [STG/0] OUT Uploading droplet, build artifacts cache...
2019-04-19 22:36:27 [STG/0] OUT Uploading droplet...
2019-04-19 22:36:27 [STG/0] OUT Uploading build artifacts cache...
2019-04-19 22:36:27 [STG/0] OUT Uploaded build artifacts cache (132B)
2019-04-19 22:36:28 [API/5] OUT Creating droplet for app with guid cd96b714-22ae-4af7-8953-ac64d9f2228a
2019-04-19 22:38:34 [STG/0] OUT Uploaded droplet (115.7M)
2019-04-19 22:38:34 [STG/0] OUT Uploading complete
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb stopping instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb destroying container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb successfully destroyed container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:35 [CELL/0] OUT Cell b3b3fcd1-d055-498f-993f-4d8d1b66b60b creating container for instance 96c95427-c6ca-486a-4400-6f43
2019-04-19 22:38:35 [CELL/0] OUT Cell b3b3fcd1-d055-498f-993f-4d8d1b66b60b successfully created container for instance 96c95427-c6ca-486a-4400-6f43
2019-04-19 22:39:03 [CELL/0] OUT Starting health monitoring of container
2019-04-19 22:39:03 [APP/PROC/WEB/0] OUT JVM Memory Configuration: -Xmx81033K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=193398K 

I could of course fix the heap size to e.g. -Xmx512M in the manifest but that would be bad if I would like to change the memory dynamically.

Any explanation of what is happening here?

Calculation Default Values: ReservedCodeCache is NOT 240MB?

Hi there,

I'm using the openjdk buildpack with my CF application. I set the CF container to 512MB and wondered why my JVM max heap is set to ~100MB and it immediately runs out of memory...

I now checked the memory calculation and saw, that ReservedCodeCache is assumed with JVM default of 240MB (as described here: https://github.com/cloudfoundry/java-buildpack-memory-calculator/blob/master/README.md).

But:
https://docs.oracle.com/javase/8/embedded/develop-apps-platforms/codecache.htm
The default is either 32MB or 48MB, but not 240MB.

So, where do the 240MB come from and why is this very high value assumed as default?

Cannot get latest version: module contains a go.mod file, so module path should be github.com/cloudfoundry/java-buildpack-memory-calculator/v4

Background

The github.com/cloudfoundry/java-buildpack-memory-calculator uses Go modules and the current release version is v4. Its module path is "github.com/cloudfoundry/java-buildpack-memory-calculator" instead of "github.com/cloudfoundry/java-buildpack-memory-calculator/v4". It must comply with the specification "Releasing Modules for v2 or higher" available in the Modules documentation. Quoting the specification:

A package that has opted in to modules must include the major version in the import path to import any v2+ modules
To preserve import compatibility, the go command requires that modules with major version v2 or later use a module path with that major version as the final element. For example, version v2.0.0 of example.com/m must instead use module path example.com/m/v2.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher

Steps to Reproduce

GO111MODULE=on, run go get targeting any version >= v4.0.0 of the cloudfoundry/java-buildpack-memory-calculator:

$ go get github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0
go: finding github.com/cloudfoundry/java-buildpack-memory-calculator v4.1.0
go: finding github.com/cloudfoundry/java-buildpack-memory-calculator v4.1.0
go get github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0: github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v4

So anyone using Go modules will not be able to easily use any newer version of cloudfoundry/java-buildpack-memory-calculator.

Solution

1. Kill the go.mod files, rolling back to GOPATH.

This would push them back to not being managed by Go modules (instead of incorrectly using Go modules).
It ensures compatibility for downstream module-aware and module-unaware projects.

2. Fix module path to strictly follow SIV rules.

Patch the go.mod file to declare the module path as github.com/cloudfoundry/java-buildpack-memory-calculator/v4 as per the specs. And adjust all internal imports.
Downstream projects might be negatively affected in their builds if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or projects using third-party dependency management tools such as Dep, glide, govendor…).

If you don't want to break the above repos, this method provides better backwards compatibility.
Release a v2 or higher module through the major subdirectory strategy: Create a new v4 subdirectory (github.com/cloudfoundry/java-buildpack-memory-calculator/v4) and place a new go.mod file in that subdirectory. The module path must end with /v4. Copy or move the code into the v4 subdirectory. Update import statements within the module to also use /v4 (import "github.com/cloudfoundry/java-buildpack-memory-calculator/v4/…"). Tag the release with v4.x.y.
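
Under the subdirectory strategy described above, the new v4/go.mod would declare the versioned module path (a minimal sketch):

```
module github.com/cloudfoundry/java-buildpack-memory-calculator/v4
```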

3. Suggest your downstream module users use hash instead of a version tag.

Don't want to fix it? If the standard rules of Go modules conflict with your development mode, or the project is not intended to be used as a library and makes no guarantees about its API, then you can't comply with the specification "Releasing Modules for v2 or higher" available in the Modules documentation.
Regardless, since it goes against one of Go's design choices, it'll be a bit of a hack. Instead of go get github.com/cloudfoundry/java-buildpack-memory-calculator@version-tag, module users need to take the following steps to get cloudfoundry/java-buildpack-memory-calculator:
(1) Search for the tag you want (in browser)
(2) Get the commit hash for the tag you want
(3) Run go get github.com/cloudfoundry/java-buildpack-memory-calculator@commit-hash
(4) Edit the go.mod file to put a comment about which version you actually used
This will make it difficult for module users to get and upgrade cloudfoundry/java-buildpack-memory-calculator.
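After step (3), go records the commit as a pseudo-version in go.mod, and step (4)'s comment then documents which tag it corresponds to. A sketch with a hypothetical date and hash:

```
require github.com/cloudfoundry/java-buildpack-memory-calculator v0.0.0-20190701000000-0123456789ab // corresponds to tag v4.1.0
```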

[*] You can see who will be affected here: [1 module users, e.g., wreulicke/emc]
https://github.com/wreulicke/emc/blob/v0.0.1/go.mod#L8

Summary

You can choose how to fix the DM issue by balancing your own development schedule/mode against the effects on the downstream projects. Observing the downstream situation shows that most downstream users are module-aware, so fix option 2 is recommended.

For this issue, Solution 2 maximizes your benefits with minimal impact on your downstream projects and the ecosystem.

References

Spring Reactive App backed by netty4 requires more direct memory

Hi,
I have a PCF reactive app built on spring-boot-starter-webflux 2.0.2, which depends on Netty 4. Deploying this app with a 1 GB container size (and using the java-buildpack memory calculator) crashes it due to lack of direct memory:

   io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 64 byte(s) of direct memory (used: 10485751, max: 10485760)
   	at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
   	at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
   	at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30)
   	at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68)
   	at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25)
   	at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625)
   	at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
   	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
   	at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
   	at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
   	at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
   	at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71)
   	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:793)
   	at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:387)
   	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
   	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
   	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:309)
   	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
   	at java.lang.Thread.run(Thread.java:748)

I tried setting direct memory using JAVA_OPTS: '-XX:+PrintFlagsFinal -XX:MaxDirectMemorySize=0m' to let the JVM control allocation; however, that setting makes the app fail to start.

Questions:

  1. How much direct memory should I set for reactive apps? Any guidelines or pointers would be helpful. The app seems to survive 400 concurrent sessions with 200 MB of direct memory at 200 ms latency.
  2. Should I set direct memory via JAVA_OPTS or JBP_CONFIG?
  3. Is there a way to set max direct memory to 0 so that the JVM controls allocation, as per https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html?

Thanks
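Not an official recommendation, but one workaround sketch for question 2: because the calculator treats user-supplied JVM flags as fixed values (see the README above) and shrinks the calculated heap to compensate, the 10M direct-memory default can be raised via JAVA_OPTS in a Cloud Foundry manifest. The app name is a placeholder, and the 200M figure is taken from the poster's own load test:

```yaml
---
applications:
- name: reactive-app        # placeholder
  memory: 1G
  env:
    # 200M came from the load test above; the calculator treats it as
    # fixed and reduces the calculated -Xmx accordingly.
    JAVA_OPTS: '-XX:MaxDirectMemorySize=200M'
```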

Issues after upgrading to 3.1.0

My app used to work fine before. This used to be the start command:

Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher

However, after upgrading recently, it is not starting up. Here is the current command:

Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.1.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=300 -loadedClasses=15357 -poolType=metaspace) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher

Here is the log:
Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.1.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=300 -loadedClasses=15357 -poolType=metaspace) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher

I am new to cf; is there a way to go back to the previous version?
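As a side note, the metaspace formula documented in the README above can be used to sanity-check what the calculator would reserve for the -loadedClasses=15357 shown in this start command. A rough sketch of the documented (5800 B × loaded class count) + 14000000 B default:

```go
package main

import "fmt"

func main() {
	// Formula from the README above: (5800 B per loaded class) + 14000000 B.
	loadedClasses := 15357 // taken from the -loadedClasses flag in the start command
	metaspaceBytes := 5800*loadedClasses + 14000000
	fmt.Printf("default metaspace estimate: %d bytes (~%d MiB)\n",
		metaspaceBytes, metaspaceBytes/(1024*1024))
	// prints: default metaspace estimate: 103070600 bytes (~98 MiB)
}
```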

Xms (minimum heap) is missing in the new memory calculator, which is causing havoc in our applications

We are in the midst of migrating from cflinuxfs2 to cflinuxfs3 along with the Java buildpack (4.28). It seems the new buildpack is not setting Xms, which is causing several runtime issues with our production applications: some applications keep crashing, a few apps doubled their CPU usage due to constant major garbage collections, and a few started fine but in a corrupt state.
Manually setting Xms is also tricky, since Xmx (max heap) is determined by the buildpack and we have to pick a value lower than or equal to Xmx.
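A hedged sketch of one way around the "Xms must not exceed the calculated Xmx" problem (values are placeholders, not recommendations): if both -Xms and -Xmx are passed in JAVA_OPTS, the calculator takes them as fixed values, so Xmx no longer needs to be guessed:

```yaml
---
applications:
- name: legacy-app          # placeholder
  memory: 1G
  env:
    # Both flags supplied, so Xms can never exceed the (now user-chosen) Xmx;
    # the calculator adjusts the remaining memory regions around them.
    JAVA_OPTS: '-Xms400M -Xmx400M'
```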
