cloudfoundry / java-buildpack-memory-calculator
Cloud Foundry JVM Memory Calculator
License: Apache License 2.0
Hi,
As the new JRE component (IBM JRE) has been added to the java buildpack, we have written a doc explaining the VM options for the IBM JRE that are not present in the current memory calculator. We would like to update the memory calculator to support the IBM JRE by emitting these options.
You can find the document here
fyi - @ashu-mehra @dinogun
Hi,
Can you confirm that the option --head-room represents the "native" memory defined in the document https://docs.google.com/document/d/1vlXBiwRIjwiVcbvUGYMrxx2Aw1RVAtxq3iuZ3UK2vXA/edit#
The terms do not match perfectly between the README and the document Java Buildpack Memory Calculator v3.
Best Regards,
Boyan
I recently read this article, which points out that the garbage collector might need to be tuned (-XX:ParallelGCThreads, -XX:ConcGCThreads) to the number of CPUs in containers. Any thoughts about including this in the memory calculator?
My app used to work fine. This was the start command:
Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.2_RELEASE -memorySizes=metaspace:64m..,stack:228k.. -memoryWeights=heap:65,metaspace:10,native:15,stack:10 -memoryInitials=heap:100%,metaspace:100% -stackThreads=300 -totMemory=$MEMORY_LIMIT) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher
However, after upgrading recently, it is not starting up. Here is the current command:
Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.1.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=300 -loadedClasses=15357 -poolType=metaspace) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher
Here is the log:
Start Command: CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-3.1.0_RELEASE -totMemory=$MEMORY_LIMIT -stackThreads=300 -loadedClasses=15357 -poolType=metaspace) && JAVA_OPTS="-Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY -Djavax.net.ssl.trustStore=$PWD/.java-buildpack/container_certificate_trust_store/truststore.jks -Djavax.net.ssl.trustStorePassword=java-buildpack-trust-store-password" && SERVER_PORT=$PORT eval exec $PWD/.java-buildpack/open_jdk_jre/bin/java $JAVA_OPTS -cp $PWD/. org.springframework.boot.loader.WarLauncher
I am new to cf; is there a way to go back to the previous version?
Most of them are pretty self-explanatory, but how would one estimate loadedClasses and stackThreads properly?
The memory calculator appears to ignore unrecognized command line options. For example:
$ java-buildpack-memory-calculator-linux -totMemory 128m -memorySizes 'metaspace:64m' memoryWeights 'heap:75,metaspace:10,stack:5,native:10'
The allocated Java memory sizes total 0 which is less than 0.8 of the available memory, so configured Java memory sizes may be too small or available memory may be too large
$ echo $?
0
This ignores memoryWeights (note the missing leading hyphen). It would be better to fail with a suitable message.
A minimal Boot application is using 8000 classes. My typical non-trivial Spring Boot application (https://github.com/mixitconf/mixit) is loading around 10000 classes. Both run fine locally with -Xmx128M.
I would expect java-buildpack-memory-calculator to be able to compute a configuration with 256M of total memory, but it is currently not possible.
./java-buildpack-memory-calculator --loaded-class-count 8000 --thread-count 4 --total-memory 256m
allocated memory is greater than 256M available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=70312K, -XX:ReservedCodeCacheSize=240M, -Xss1M x 4 threads
The -XX:ReservedCodeCacheSize=240M looks suspicious to me.
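For what it's worth, the failure is pure arithmetic: the fixed-size regions alone exceed the 256M budget before any heap is assigned. A quick sketch in Python, using the sizes from the error message above:

```python
KIB_PER_MIB = 1024

# Fixed, non-heap regions from the error message above (all in KiB).
direct_memory = 10 * KIB_PER_MIB     # -XX:MaxDirectMemorySize=10M
metaspace     = 70312                # -XX:MaxMetaspaceSize=70312K
code_cache    = 240 * KIB_PER_MIB    # -XX:ReservedCodeCacheSize=240M
stacks        = 4 * 1 * KIB_PER_MIB  # -Xss1M x 4 threads

total = 256 * KIB_PER_MIB            # --total-memory 256m

non_heap = direct_memory + metaspace + code_cache + stacks
print(non_heap)          # 330408 KiB reserved before any heap
print(non_heap - total)  # 68264 KiB over budget with zero heap allocated
```

The 240M code cache alone consumes almost the entire 256M, which is why the calculator cannot find any room for the heap.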
I believe this may be a copy/paste mistake on line 68 of the readme:
"The Spring Cloud Services CLI plugin is Open Source software released under the Apache 2.0 license"
Just curious about the reasoning behind setting -XX:MaxDirectMemorySize to 10M by default? The Java default seems to be to set it to max memory, and 10M as a default is quite a bit different from max memory.
A couple of references:
The memory calculator sometimes issues a warning message (warning, presumably, because the exit status code is 0) but fails to print the memory settings. For example:
$ java-buildpack-memory-calculator-linux -totMemory 128m -memorySizes 'metaspace:64m' memoryWeights 'heap:75,metaspace:10,stack:5,native:10'
The allocated Java memory sizes total 0 which is less than 0.8 of the available memory, so configured Java memory sizes may be too small or available memory may be too large
$ echo $?
0
This should either be a warning, with the memory settings printed, or an error with a non-zero exit status. The problem with making it an error is that the values passed in may be intentional.
Page is this: https://github.com/cloudfoundry/java-buildpack-memory-calculator
Text is this: Java Buildpack Memory Calculator v3
Link is this: https://docs.google.com/document/d/1vlXBiwRIjwiVcbvUGYMrxx2Aw1RVAtxq3iuZ3UK2vXA/edit?usp=sharing
Following the link results in Google reporting that the document is no longer available.
Hello,
The calculator's README file states that:
For Java 8 and later, the memory calculator sets the maximum metaspace size (-XX:MaxMetaspaceSize) based on the number of classes that will be loaded and sets the reserved code cache size (-XX:ReservedCodeCacheSize) to 240 Mb.
For Java 7, it sets the maximum permanent generation size (-XX:MaxPermSize) based on the number of classes that will be loaded and sets the reserved code cache size (-XX:ReservedCodeCacheSize) to 48 Mb.
And also the linked google document states:
Memory calculator v3 estimates:
- maximum metaspace size
- compressed class space size
based on the number of class files in the application. It sets the following to constants:
- reserved code cache size (48 Mb on Java 7; 240 Mb on Java 8)
- maximum direct memory size (10 Mb)
and then sets the heap size to the remainder of total memory.
But the rationale behind the increase of reserved code cache from 48 Mb to 240 Mb is not explained anywhere. Would it be possible to explain it?
Thanks in advance!
The github.com/cloudfoundry/java-buildpack-memory-calculator repository uses Go modules and the current release version is v4. However, its module path is "github.com/cloudfoundry/java-buildpack-memory-calculator" instead of "github.com/cloudfoundry/java-buildpack-memory-calculator/v4". It must comply with the specification of "Releasing Modules for v2 or higher" available in the Modules documentation. Quoting the specification:
A package that has opted in to modules must include the major version in the import path to import any v2+ modules
To preserve import compatibility, the go command requires that modules with major version v2 or later use a module path with that major version as the final element. For example, version v2.0.0 of example.com/m must instead use module path example.com/m/v2.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher
With GO111MODULE=on, run go get targeting any version >= v4.0.0 of cloudfoundry/java-buildpack-memory-calculator:
$ go get github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0
go: finding github.com/cloudfoundry/java-buildpack-memory-calculator v4.1.0
go: finding github.com/cloudfoundry/java-buildpack-memory-calculator v4.1.0
go get github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0: github.com/cloudfoundry/java-buildpack-memory-calculator@v4.1.0: invalid version: module contains a go.mod file, so major version must be compatible: should be v0 or v1, not v4
So anyone using Go modules will not be able to easily use any newer version of cloudfoundry/java-buildpack-memory-calculator.
This would push them back to not using Go modules at all (instead of using Go modules incorrectly).
Ensure compatibility for downstream module-aware and module-unaware projects
Patch the go.mod file to declare the module path as github.com/cloudfoundry/java-buildpack-memory-calculator/v4, as per the specs, and adjust all internal imports.
Downstream projects might be negatively affected in their builds if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or using third-party dependency management tools such as Dep, glide, govendor, …).
If you don't want to break those repos, the following method provides better backwards compatibility.
Release a v2 or higher module through the major-subdirectory strategy: create a new v4 subdirectory (github.com/cloudfoundry/java-buildpack-memory-calculator/v4) and place a new go.mod file in that subdirectory. The module path must end with /v4. Copy or move the code into the v4 subdirectory. Update import statements within the module to also use /v4 (import "github.com/cloudfoundry/java-buildpack-memory-calculator/v4/…"). Tag the release with v4.x.y.
Don't want to fix it. If the standard rules of Go modules conflict with your development mode, or the project is not intended to be used as a library and makes no guarantees about its API, then you may choose not to comply with the specification of "Releasing Modules for v2 or higher" in the Modules documentation.
Regardless, since this goes against one of the design choices of Go, it will be a bit of a hack. Instead of go get github.com/cloudfoundry/java-buildpack-memory-calculator@version-tag, module users need to use the following workaround to get cloudfoundry/java-buildpack-memory-calculator:
(1) Search for the tag you want (in the browser)
(2) Get the commit hash for that tag
(3) Run go get github.com/cloudfoundry/java-buildpack-memory-calculator@commit-hash
(4) Edit the go.mod file to add a comment noting which version you actually used
This will make it difficult for module users to get and upgrade cloudfoundry/java-buildpack-memory-calculator.
[*] You can see who will be affected here: [1 module user, e.g., wreulicke/emc]
https://github.com/wreulicke/emc/blob/v0.0.1/go.mod#L8
You can choose how to fix dependency-management issues by balancing your own development schedule/mode against the effects on the downstream projects. Observing the downstream situation, most downstream users are module users, so fix option 2 is recommended.
For this issue, solution 2 maximizes your benefits with minimal impact on your downstream projects and the ecosystem.
I have a very small application, and the ReservedCodeCacheSize of 240M is not appropriate and stealing the heap that I need. You can configure some things with the Environment variable, but I didn't see any way to tweak this parameter, or change the Java Parameters directly.
Applications that heavily use NIO, like some Netty-based ones, sometimes benefit from more direct memory as the overall memory available to the JVM increases. This is because direct memory does not have, in itself, garbage collection; rather, it relies on APIs like Cleaner or other phantom-reference-based ones to free direct memory as a consequence of evicting objects (like ByteBuffers) from the heap.
We actually have one such use-case in the Instana Agent, which is a Netty-based application that uses significantly more direct memory than most Java apps. When the Instana Agent runs in a container, we usually want to allocate roughly 1MB to direct memory for each 3MB to the heap, although heavy usage of some functionality skews that ratio one way or the other.
We have forked the memory calculator for tweaking it to our needs, but we'd be happy to contribute back if this feature is interesting to this project.
The min heap size (aka -Xms) is an important value at startup: keeping the default value (128 MB) implies slow startup times when all the classes loaded at startup cannot fit in this pre-allocated memory space.
It can even be the main cause of OutOfMemory errors if the growth rate is higher than the JVM/container ability to allocate more memory on-demand.
Could you try to create some heuristics to compute the min heap size according to the number of classes to load?
Without access to a built memory calculator to get the usage instructions, the source code has to be read to know what the possible usage flags are. Even better, paste the command-line usage instructions into the README file.
Tested this against the excellent JBP 4.3 release.
I would like to provide a default memory configuration in my java_opts.yml and allow users to override it using the JAVA_OPTS app environment variable.
Java takes the last occurrence of a duplicated parameter, but the memory calculator appears to take the first.
java -Xss1M -Xss256k -XX:+PrintFlagsFinal | grep "intx ThreadStackSize"
intx ThreadStackSize := 256 {pd product}
It appears in the example below that the memory calculator prefers the first value. JBP appears to correctly place the environment JAVA_OPTS at the end of the start command. However, the memory calculator will think the application can use more heap than it really can.
cf set-env testapp JBP_CONFIG_JAVA_OPTS '{ java_opts: "-Xss256k" }'
cf set-env testapp JAVA_OPTS "-Xss1M"
We've been struggling with a lot of "Direct Memory Buffer" errors in some of our kafka-based spring cloud stream applications, running on PCF.
We noticed it mostly occurs while a single consumer is assigned multiple topic partitions.
2018-02-05T18:26:30.178-06:00 [APP/PROC/WEB/4] [OUT] java.lang.OutOfMemoryError: Direct buffer memory
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.common.network.Selector.poll(Selector.java:291) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232) ~[kafka-clients-0.10.1.1.jar!/:na]
2018-02-05T18:26:30.179-06:00 [APP/PROC/WEB/4] [OUT] at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979) ~[kafka-clients-0.10.1.1.jar!/:na]
I believe this issue is related to the fact that the buildpack assigns only 10 megabytes to the direct memory buffer, hardcoded.
With kafka using nio for all its processing, this can be the reason for our error.
I believe the buildpack should calculate the value for -XX:MaxDirectMemorySize based on additional parameters, and if spring-cloud-stream-kafka-binder is on the classpath, it should increase the value.
We want to use this memory calculator as part of the launch script in Arm64 docker images.
Hi there,
I'm using the openjdk buildpack with my CF application. I set the CF container to 512MB and wondered why my JVM max heap is set to ~100MB and immediately runs out of memory...
I now checked the memory calculation and saw that ReservedCodeCacheSize is assumed to be the JVM default of 240MB (as described here: https://github.com/cloudfoundry/java-buildpack-memory-calculator/blob/master/README.md).
But:
https://docs.oracle.com/javase/8/embedded/develop-apps-platforms/codecache.htm
The default there is either 32MB or 48MB, but not 240MB.
So, where does the 240MB come from, and why is this very high value assumed as the default?
Hi,
I have a PCF reactive app built on spring-boot-starter-webflux 2.0.2, which depends on Netty version 4. Deploying this app with a 1 GB container size (and using the java-buildpack memory calculator) crashes it due to lack of direct memory:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 64 byte(s) of direct memory (used: 10485751, max: 10485760)
at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.allocateDirect(UnpooledUnsafeNoCleanerDirectByteBuf.java:30)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.<init>(UnpooledUnsafeDirectByteBuf.java:68)
at io.netty.buffer.UnpooledUnsafeNoCleanerDirectByteBuf.<init>(UnpooledUnsafeNoCleanerDirectByteBuf.java:25)
at io.netty.buffer.UnsafeByteBufUtil.newUnsafeDirectByteBuf(UnsafeByteBufUtil.java:625)
at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71)
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:793)
at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:387)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:309)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
at java.lang.Thread.run(Thread.java:748)
I tried setting direct memory using JAVA_OPTS: '-XX:+PrintFlagsFinal -XX:MaxDirectMemorySize=0m' to let the JVM control allocation; however, that setting makes the app fail to start.
Questions:
Thanks
I've put the sample app here: https://github.com/berndgoetz/cf-jbp-memcalc-test
The manifest defines 768M of memory for the container. My expectation is that around 75% or more of that memory is given to the JRE as max heap space, hence around 576M.
But what you can see at startup is
2019-04-19 22:39:03 [APP/PROC/WEB/0] OUT JVM Memory Configuration: -Xmx81033K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=193398K
The calculator gives the Java process -Xmx81033K, which is only about 79M, roughly 10% of the container! What happens with the rest of the memory?
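For reference, the log numbers do add up once the non-heap regions are subtracted. A small Python sketch (the thread count of 250 is taken from the "Loaded Classes: 30953, Threads: 250" staging log line) reconstructs the reported -Xmx:

```python
KIB = 1024

total      = 768 * KIB   # container memory limit from the manifest (KiB)
code_cache = 240 * KIB   # -XX:ReservedCodeCacheSize=240M
direct     = 10 * KIB    # -XX:MaxDirectMemorySize=10M
metaspace  = 193398      # -XX:MaxMetaspaceSize=193398K
stacks     = 250 * KIB   # 250 threads (per the staging log) x -Xss1M

heap = total - code_cache - direct - metaspace - stacks
print(heap)  # 81034 KiB, matching the reported -Xmx81033K up to rounding
```

So the "missing" memory is mostly the 240M code cache, the 193398K metaspace estimate, and 250M of thread stacks; the calculator did hand out the whole container, just not to the heap.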
The Java Buildpack version is 4.17.2; the Memory Calculator version is 3.13.0.
From the startup logs:
2019-04-19 22:35:39 [STG/0] OUT Downloading java_buildpack...
2019-04-19 22:36:03 [STG/0] OUT Downloaded java_buildpack (644.2M)
2019-04-19 22:36:03 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb creating container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:36:04 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb successfully created container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:36:04 [STG/0] OUT Downloading build artifacts cache...
2019-04-19 22:36:04 [STG/0] OUT Downloading app package...
2019-04-19 22:36:05 [STG/0] OUT Downloaded build artifacts cache (132B)
2019-04-19 22:36:08 [STG/0] OUT Downloaded app package (72.8M)
2019-04-19 22:36:10 [STG/0] OUT -----> Java Buildpack v4.17.2 (offline) | https://github.com/cloudfoundry/java-buildpack.git#6ce39cf
2019-04-19 22:36:10 [STG/0] OUT -----> Downloading Jvmkill Agent 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/jvmkill/bionic/x86_64/jvmkill-1.16.0_RELEASE.so (found in cache)
2019-04-19 22:36:10 [STG/0] OUT -----> Downloading Open Jdk JRE 11.0.2_09 from https://java-buildpack.cloudfoundry.org/openjdk/bionic/x86_64/openjdk-11.0.2_09.tar.gz (found in cache)
2019-04-19 22:36:11 [STG/0] OUT Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.2s)
2019-04-19 22:36:11 [STG/0] OUT JVM DNS caching disabled in lieu of BOSH DNS caching
2019-04-19 22:36:11 [STG/0] OUT -----> Downloading Open JDK Like Memory Calculator 3.13.0_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/bionic/x86_64/memory-calculator-3.13.0_RELEASE.tar.gz (found in cache)
2019-04-19 22:36:12 [STG/0] OUT Loaded Classes: 30953, Threads: 250
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Client Certificate Mapper 1.8.0_RELEASE from https://java-buildpack.cloudfoundry.org/client-certificate-mapper/client-certificate-mapper-1.8.0_RELEASE.jar (found in cache)
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Container Security Provider 1.16.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.16.0_RELEASE.jar (found in cache)
2019-04-19 22:36:12 [STG/0] OUT -----> Downloading Spring Auto Reconfiguration 2.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-2.5.0_RELEASE.jar (found in cache)
2019-04-19 22:36:27 [STG/0] OUT Exit status 0
2019-04-19 22:36:27 [STG/0] OUT Uploading droplet, build artifacts cache...
2019-04-19 22:36:27 [STG/0] OUT Uploading droplet...
2019-04-19 22:36:27 [STG/0] OUT Uploading build artifacts cache...
2019-04-19 22:36:27 [STG/0] OUT Uploaded build artifacts cache (132B)
2019-04-19 22:36:28 [API/5] OUT Creating droplet for app with guid cd96b714-22ae-4af7-8953-ac64d9f2228a
2019-04-19 22:38:34 [STG/0] OUT Uploaded droplet (115.7M)
2019-04-19 22:38:34 [STG/0] OUT Uploading complete
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb stopping instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb destroying container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:34 [STG/0] OUT Cell 697bc229-5736-47b7-8d02-070b04734ebb successfully destroyed container for instance 0eb8deba-f4f5-4ff3-81ca-8ca4b6b60123
2019-04-19 22:38:35 [CELL/0] OUT Cell b3b3fcd1-d055-498f-993f-4d8d1b66b60b creating container for instance 96c95427-c6ca-486a-4400-6f43
2019-04-19 22:38:35 [CELL/0] OUT Cell b3b3fcd1-d055-498f-993f-4d8d1b66b60b successfully created container for instance 96c95427-c6ca-486a-4400-6f43
2019-04-19 22:39:03 [CELL/0] OUT Starting health monitoring of container
2019-04-19 22:39:03 [APP/PROC/WEB/0] OUT JVM Memory Configuration: -Xmx81033K -Xss1M -XX:ReservedCodeCacheSize=240M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=193398K
I could of course fix the heap size to e.g. -Xmx512M in the manifest, but that would be bad if I wanted to change the memory dynamically.
Any explanation of what is happening here?
We run our app on Pivotal's CF, which runs memory calculator 3.9.0_RELEASE.
Our JVM memory was 300MB less than our container limit, so I ran the memory calculator manually and realised that the -stackThreads argument affects the -Xmx value.
Based on the docs it should decrease the heap by the calculated stack memory and increase the stack size, BUT the stack value remains -Xss1M no matter what the -stackThreads param is:
$ ./java-buildpack-memory-calculator-3.9.0_RELEASE -totMemory=1280M -loadedClasses=27000 -stackThreads=1 -poolType=metaspace -vmOptions=""
-XX:ReservedCodeCacheSize=240M -XX:CompressedClassSpaceSize=26269K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=166601K -Xss1M -Xmx860824K
$ ./java-buildpack-memory-calculator-3.9.0_RELEASE -totMemory=1280M -loadedClasses=27000 -stackThreads=300 -poolType=metaspace -vmOptions=""
-XX:ReservedCodeCacheSize=240M -XX:CompressedClassSpaceSize=26269K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=166601K -Xss1M -Xmx554648K
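The delta between the two runs is exactly the extra stack reservation: 299 additional threads at the assumed 1M stack each come straight out of -Xmx. A quick Python check:

```python
KIB = 1024

xmx_1_thread    = 860824  # -Xmx from the -stackThreads=1 run (KiB)
xmx_300_threads = 554648  # -Xmx from the -stackThreads=300 run (KiB)

extra_threads = 300 - 1
heap_delta = xmx_1_thread - xmx_300_threads
print(heap_delta)                         # 306176 KiB
print(heap_delta == extra_threads * KIB)  # True: exactly 1M per extra thread
```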
I see two problems: in memory/allocator.go, the calculateMaxHeapSize method never sets the stack size, it just decreases the heap, so we lose memory.
Does the memory calculator take into account the overhead memory usage of frameworks such as Dynatrace OneAgent, integrated with the java buildpack? If yes, what memory (memory region in the JVM or headroom) is used by such frameworks, so as to be able to tweak that memory's settings?
The document Java Buildpack Memory Calculator v3 linked in the project's readme contains the following formula for estimating metaspace:
5400 * (number of loaded classes) + 7000000
This appears to be out of sync with:
java-buildpack-memory-calculator/calculator/calculator.go
Lines 103 to 105 in beef546
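For a concrete comparison, the document's formula can be evaluated directly for the 27000-class example above (a sketch; 5400 and 7000000 are the constants quoted from the document, which may differ from what calculator.go currently uses):

```python
def estimated_metaspace_bytes(loaded_classes: int) -> int:
    # Formula quoted from the "Java Buildpack Memory Calculator v3" document.
    return 5400 * loaded_classes + 7_000_000

kib = estimated_metaspace_bytes(27000) // 1024
print(kib)  # 149218 KiB
```

That is noticeably below the -XX:MaxMetaspaceSize=166601K the 3.9.0 calculator emits for the same class count, consistent with the report that the formula and the code have drifted apart.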
Thanks for this solution!
Maybe you can use a Github release and/or goreleaser.
The use case is: starting a container in Kubernetes / docker-compose. It would be nice to have a bash script running java -cp ... and appending the options generated by this app, just by curling for the binary and using it.
It looks like the memory calculator assumes a default stack size of 1M for the calculation if -Xss is not set manually by developers in JAVA_OPTS. However, if this default value is used, no -Xss flag is ever given to the JVM, which leads to the JVM's own default stack size being used.
It's probably a minor issue since most JVMs use less than or exactly 1M as stack size by default (e.g. OpenJDK JRE), but it's a potential inconsistency between the memory calculation and the actual limit taken into account during runtime. Also, when the JVM uses a default stack size less than 1M, e.g. when using a different JRE, the memory calculator considers too much memory for stacks (stack size * stack threads).
Therefore, and for the sake of transparency, I think it would make sense to add -Xss1024k explicitly in case the memory calculator uses its default value.
What do you think?
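The size of the potential over-reservation is easy to estimate. A Python sketch, assuming the calculator's 1M-per-thread default, the 250-thread default seen in the buildpack logs, and a hypothetical JRE whose real default stack size is 512K:

```python
KIB = 1024
threads = 250  # default thread count seen in the buildpack's staging logs

reserved_stack = 1 * KIB  # KiB per thread the calculator assumes (-Xss1M default)
actual_stack   = 512      # hypothetical JRE default of 512K, since no -Xss is emitted

over_reserved = threads * (reserved_stack - actual_stack)
print(over_reserved // KIB)  # 125 MiB reserved for stacks but never usable as heap
```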
It would be good to be able to set a max total to cater for limited platforms like Azure App Service, where you cannot actually set a memory limit for Linux containers. So instead of detecting the underlying platform max, I'd like to set some var MAX_MEM=1g and let the calculator do its calculation based on this max.
There is https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-windows#customize-container-memory - but it only works for windows containers.
I have a Spring Boot project and a docker image built by mvn spring-boot:build-image. When I run the image with docker run (with just a port parameter), I see a warning:
Setting Active Processor Count to 8
WARNING: Container memory limit unset. Configuring JVM for 1G container.
Calculated JVM Memory Configuration: -XX:MaxDirectMemorySize=10M -Xmx192904K -XX:MaxMetaspaceSize=343671K -XX:ReservedCodeCacheSize=240M -Xss1M (Total Memory: 1G, Thread Count: 250, Loaded Class Count: 58262, Headroom: 0%)
When I run docker stats I get this:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2c14517cf3e1 laughing_albattani 0.17% 528.2MiB / 5.966GiB 8.65% 1.26kB / 0B 0B / 0B 48
So why does it configure the JVM for just 1GB? And Xmx for only 200MB? I would expect much higher values.
It's not always possible to install everything you want on a computer. Can you provide the binaries for Windows and macOS directly on the GitHub page? For example: like the Linux version, in the "release" section.
Or maybe make a full web version of this tool?
We are in the midst of migrating from cflinuxfs2 to cflinuxfs3 along with the java buildpack (4.28). It seems the new buildpack is not setting -Xms, which is causing several runtime issues with our production applications. Some applications keep crashing, a few apps doubled their CPU usage due to constant major garbage collections, and a few apps started fine but with corrupt state.
Manually setting -Xms is also tricky, since -Xmx (max heap) is determined by the buildpack and we have to set -Xms to a value lower than or equal to that -Xmx.
I'm trying to deploy a Spring Boot app on Pivotal Cloud Foundry, using Spring Kafka to send a message to Kafka, and I am getting the error below with 1GB memory in manifest.yml. My Spring Boot app has just one REST controller with only one endpoint to send a message. The app starts without any issue if I change the memory to 2GB in my manifest.yml, and it takes almost 1.3 GB - 1.8 GB to run. I'm wondering why this simple app takes so much. Is it because of Kafka?
In our app, we are using org.apache.kafka.clients.admin.KafkaAdminClient.
We are using Spring 2.x, Java 8 and buildpack 3.9 with this app.
[APP/PROC/WEB/0] OUT # java.lang.OutOfMemoryError: Java heap space
[APP/PROC/WEB/0] OUT # -XX:OnOutOfMemoryError="/home/vcap/app/.java-buildpack/open_jdk_jre/bin/killjava.sh"
Any help to understand this issue would be greatly appreciated.
In applications that should optimise for minor garbage collection (which is most cloud-based ones, in my experience), it is useful to set -Xmn to fix how much heap memory can be allocated for the young generation. This parameter usually makes sense as a fraction of the total heap memory, so that allocating more memory to the surrounding container lets the memory allocated to the young generation grow linearly with the total memory available to the heap.
We have implemented this feature in our fork of the memory calculator used by the Instana Agent, but we'd be happy to contribute back if this feature is interesting to this project.
Hi,
I'm incredibly grateful for this project, but I encountered an unexpected problem yesterday. I was deploying a container with settings following this project, but the container kept going OOM. When observing the JVM process, there was a difference of 300MB to 400MB between the RSS value reported by ps and the sum of the JVM memory areas, on a 3GB heap with Java 11. All limits were enforced.
From what I can tell, this was memory used by the G1 garbage collector. This difference was significantly smaller for the parallel and serial GCs, with a minimum of 50MB to 100MB for the parallel GC. It seems that the GC does not limit itself, or simply cannot work with less than these values.
I understand that it is very difficult, maybe impossible, to predict these values. But maybe it's possible to at least gather data on this for documentation purposes. Right now, even using all of the heuristics in this project, I still have to guess for the right amount of headroom in a trial-and-error process.
Log lines do not have newlines and this results in one of the following: