
Troubleshooting memory/JVM usage

Why it matters

Memory (RAM) is a finite resource that impacts PunchPlatform operations:

  • it limits what the PunchPlatform can do (e.g. the number of running topologies)
  • it limits PunchPlatform performance (e.g. insufficient RAM leading to cache issues)

How to investigate

This section details how to investigate memory usage on a given node.

Java process

Heap size

The Java Virtual Machine (JVM) heap is the area of memory used for dynamic memory allocation.

The following Java command line parameters control Java heap allocation:

  • minimum allocation size: -Xms<heap size> (e.g. -Xms256m for 256 MB)
  • maximum allocation size: -Xmx<heap size> (e.g. -Xmx1g for 1 GB, i.e. 1024 MB)

These parameters are used at the application launch and they work with any Java application. Here is a usage example:

java -Xms128m -Xmx256m -cp /path/a:/path/b <main-class>

To find your configuration's current default heap sizes (used when -Xms/-Xmx are not specified), use this command:

java -XX:+PrintFlagsFinal -version | grep HeapSize

It is a good practice to set the minimum heap allocation size (-Xms) to the same value as the maximum heap allocation size (-Xmx), so that all memory is allocated at JVM start.

When multiple -Xms or -Xmx flags are specified, only the last specified value is used. This is an undocumented behaviour of the JVM.


To investigate the memory usage of a Java program, multiple solutions are available.

Before going further, keep in mind that Java maps a large virtual memory area that allows efficient access to all jar files. This does not correspond to allocated physical memory, and is mostly shared between all JVMs using the same class path. More significant indicators are:

  • VmHWM: peak resident set size
  • VmRSS: current resident set size
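
As a quick check, these fields can be read straight from /proc/<pid>/status; here is a minimal sketch using the current shell's own PID (substitute the PID of your JVM):

```shell
# Read the peak (VmHWM) and current (VmRSS) resident set size of a process.
# $$ is the current shell's PID; replace it with your JVM's PID.
grep -E '^Vm(HWM|RSS):' /proc/$$/status
```
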
Java Console

If you are working on a host with a graphical interface, or can run and forward X11, the jconsole command is the starting point.

If Java is installed on your host, jconsole should be available, as it ships with the JDK. To use it, find the process ID of the Java application you want to monitor, then run:

jconsole <pid>

This will open up a GUI application with a lot of charts representing memory and CPU consumption, threads, etc.

Java VisualVM

An alternative to the Java Console is Java VisualVM. It can be used in the same conditions, with a slightly friendlier user interface.

This tool is not part of the Java binaries, please refer to the official VisualVM documentation for installation details.

Once you are set up, run the application. A graphical interface should show up, listing in the left-hand menu all the Java applications running on your host. Double-click the desired one to inspect it.

Java heap dump

When working on a remote server, it is not convenient to use the tools described above. A better option is to get a JVM heap dump, a .hprof binary file that can be analysed later from a host with a graphical interface.

To do so, you can use the command line tool called jcmd. It is a rich and powerful tool for debugging any Java application. One of its features produces the dump we need; run:

jcmd <pid> GC.heap_dump <output-file-path>

# for example, on a real process
jcmd 29890 GC.heap_dump /tmp/my-test-dump.hprof
Heap dump file created

Now, transfer this file to a computer with VisualVM installed and import it with the "File > Load" menu.


When using a relative path, the current directory is the one used to launch the Java process. To avoid generating the file in an unwanted location, always prefer an absolute path.


Keep in mind that this procedure is quite heavy and can impact performance during its execution.

Another way to generate a JVM dump, without any runtime performance cost, is to dump only when an "Out Of Memory" error is thrown. To enable this behaviour, ensure these parameters are passed to the JVM at launch time:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<output-path> ...
Java thread dump

If you are working on a server with no way to export .hprof files, a good option is to dump the current JVM thread stacks.

To do so, get the JVM process ID and run:

jcmd <pid> Thread.print

# For example, here is a sample output from a real process
jcmd 29890 Thread.print
2019-04-03 13:49:37
Full thread dump OpenJDK 64-Bit Server VM (25.191-b12 mixed mode):

"RMI Scheduler(0)" #31 daemon prio=9 os_prio=0 tid=0x00007f6d6400a800 nid=0x750d waiting on condition [0x00007f6de0690000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00000000e19d7478> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(
    at java.util.concurrent.ThreadPoolExecutor.getTask(
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$

"RMI TCP Accept-0" #29 daemon prio=9 os_prio=0 tid=0x00007f6d6c017800 nid=0x7509 runnable [0x00007f6de0892000]
   java.lang.Thread.State: RUNNABLE
    at Method)
    at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(
    at sun.rmi.transport.tcp.TCPTransport$



System memory

The simplest way to check global RAM usage is via /proc/meminfo. This dynamically updated virtual file is actually the source of the information displayed by many other memory-related tools such as free, top and ps. From the amount of available/free physical memory to the amount of buffers waiting to be written back to disk, /proc/meminfo has everything you want to know about system memory usage.

cat /proc/meminfo 
   MemTotal:        8042864 kB
   MemFree:          126524 kB
   MemAvailable:      91196 kB
   Buffers:            3804 kB
   Cached:           195584 kB
   SwapCached:       143020 kB
   Active:           403216 kB
   Inactive:         388652 kB
   Active(anon):     348516 kB
   Inactive(anon):   347760 kB
   Active(file):      54700 kB
   Inactive(file):    40892 kB
   Unevictable:        6524 kB
   Mlocked:            6524 kB
   SwapTotal:       8254460 kB
   SwapFree:        4796476 kB
   Dirty:               188 kB
   Writeback:             0 kB
   AnonPages:        591632 kB
   Mapped:          6868980 kB
   Shmem:            100248 kB
   Slab:             216812 kB
   SReclaimable:     114056 kB
   SUnreclaim:       102756 kB
   KernelStack:       12560 kB
   PageTables:        66404 kB
   NFS_Unstable:          0 kB
   Bounce:                0 kB
   WritebackTmp:          0 kB
   CommitLimit:    12275892 kB
   Committed_AS:   15649752 kB
   VmallocTotal:   34359738367 kB
   VmallocUsed:           0 kB
   VmallocChunk:          0 kB
   HardwareCorrupted:     0 kB
   AnonHugePages:      4096 kB
   CmaTotal:              0 kB
   CmaFree:               0 kB
   HugePages_Total:       0
   HugePages_Free:        0
   HugePages_Rsvd:        0
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   DirectMap4k:      359180 kB
   DirectMap2M:     7897088 kB
   DirectMap1G:           0 kB

Process-specific memory information is also available from /proc/<pid>/statm and /proc/<pid>/status. This is documented in the proc man page.
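
For instance, /proc/<pid>/statm reports sizes in pages rather than kB; here is a minimal sketch, using the current shell as the target process, converting its first two fields:

```shell
# /proc/<pid>/statm fields are counted in pages; convert to kB with the page size.
pid=$$                                   # substitute your JVM's PID
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
read -r size resident rest < /proc/$pid/statm
echo "VmSize: $(( size * page_kb )) kB"
echo "VmRSS:  $(( resident * page_kb )) kB"
```
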

Other tools are available that present this information in a friendlier way:

  • free
  • vmstat
  • top
  • htop
  • memstat -p
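
These tools also lend themselves to scripting; for example, a one-liner (an illustration, assuming the procps version of free that reports an "available" column) extracting the available memory:

```shell
# Print the estimated available memory, in MB, as reported by free.
# The 7th field of the "Mem:" row is the "available" column in procps free.
free -m | awk '/^Mem:/ {print $7}'
```
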

When a process creates threads (see the Threads field in /proc/<pid>/status):

  • it allocates a stack for each thread
  • it can reach a system limit and cause an out-of-memory error, which is typically logged
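
The Threads field mentioned above gives the current thread count of a process; here is a quick sketch reading it for the current shell (substitute your JVM's PID):

```shell
# Show how many threads a process currently has.
# $$ is the current shell's PID; replace it with the PID of your JVM.
grep '^Threads:' /proc/$$/status
```
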

To see your system's thread count limit, you can use this command:

sudo sysctl -a | grep kernel.threads-max
kernel.threads-max = 125205
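
To relate this limit to actual usage, the system-wide thread count can be compared against it; a sketch assuming a standard /proc layout, where each thread appears as a /proc/<pid>/task/<tid> directory:

```shell
# Compare the number of threads currently alive with the kernel limit.
limit=$(cat /proc/sys/kernel/threads-max)
current=$(ls -d /proc/[0-9]*/task/* 2>/dev/null | wc -l)
echo "threads in use: $current / $limit"
```
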