jstack and jmap

When diagnosing performance issues, the JDK/bin tools jstack and jmap are two key sources of information for discovering what might be causing problems in your JVM(s). Running the jstack utility causes each thread to dump its stack, while jmap takes a snapshot of your entire heap. The jstack output is fairly straightforward, whereas a jmap dump provides much more information. I’ll write more on the usefulness of jmap dumps in the future. However, a comparison of the overlapping features of jstack and jmap is also worth discussing.

In order to illustrate the output of jstack and jmap, we’ll need to run some sample code. The following snippet is a simple servlet, which writes the date and time to the HTTP response. I have added methods methodOne, methodTwo, methodThree, and methodFour, which are functionally irrelevant to the program, but will be useful for demonstrating the power of jmap.

public class SampleServlet extends HttpServlet {

    SimpleDateFormat FORMAT = new SimpleDateFormat("dd-MM-yyyy hh:mm:ss SSS");

    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        PrintWriter writer = resp.getWriter();
        writer.println("The time is now " + FORMAT.format(new Date()));
        methodOne(2);
    }

    void methodOne(int a) {
        methodTwo("Hi");
    }

    void methodTwo(String b) {
        methodThree();
    }

    void methodThree() {
        ThreadLocal local = new ThreadLocal();
        local.set("Discotek.ca rocks!");
        methodFour();
    }

    void methodFour() {
        try { Thread.sleep(10 * 1000); }
        catch (Exception e) {}
        System.out.println("Waking up...");
    }
}
The above code is bundled in a war file, which you can download from the resources section at the end of this article. The war should be deployable to any web container. The URL may differ depending on your choice of web container. With JBoss, you can access the app at http://localhost:8080/discotek.sample-webapp/sample. Once the application is invoked, it will hang for ten seconds. This is intentional in order to give us time to run our utilities.

Before we can invoke the jstack and jmap commands, we will need to know the ID of the java process. We can get this information in a number of different ways:

  • On Unix/Linux boxes, we can use the ps command to list the running processes with their IDs. On a Windows box we can use Task Manager’s Processes tab.
  • The JDK/bin jconsole utility has many features, but will also list the running Java processes with their IDs.
  • The JDK/bin jps command line utility will output a list of all running Java processes with their IDs.
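A JVM can also report its own PID, which is handy when wiring diagnostics into the application itself. As a sketch (the `pid@hostname` format of `RuntimeMXBean.getName()` is a long-standing HotSpot convention rather than a spec guarantee, and the class name here is arbitrary):

```java
import java.lang.management.ManagementFactory;

public class PidExample {

    // On HotSpot, RuntimeMXBean.getName() returns a string of the
    // form "<pid>@<hostname>", so the PID is the part before the '@'.
    static String currentPid() {
        String name = ManagementFactory.getRuntimeMXBean().getName();
        return name.split("@")[0];
    }

    public static void main(String[] args) {
        System.out.println("This JVM's PID is " + currentPid());
    }
}
```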

Now that we have the PID for the Java process, let’s start by running the jstack utility. The syntax is:

    jstack [-l] <pid>
        (to connect to running process)

    -l  long listing. Prints additional information about locks
    -h or -help to print this help message

jstack will dump a stack trace for each active thread in the JVM. I am using JBoss to run the sample web application, which has too many threads to display here. Here is the output for the thread we are interested in:

"http--" daemon prio=6 tid=0x35e3f400 nid=0x450 waiting on condition [0x36dcf000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
        at java.lang.Thread.sleep(Native Method)
        at ca.discotek.sample.webapp.SampleServlet.methodFour(SampleServlet.java:47)
        at ca.discotek.sample.webapp.SampleServlet.methodThree(SampleServlet.java:42)
        at ca.discotek.sample.webapp.SampleServlet.methodTwo(SampleServlet.java:35)
        at ca.discotek.sample.webapp.SampleServlet.methodOne(SampleServlet.java:31)
        at ca.discotek.sample.webapp.SampleServlet.doGet(SampleServlet.java:24)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:329)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:248)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:275)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:161)
        at org.jboss.as.web.security.SecurityContextAssociationValve.invoke(SecurityContextAssociationValve.java:153)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:155)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:567)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:368)
        at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:877)
        at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:671)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:930)
        at java.lang.Thread.run(Unknown Source)

   Locked ownable synchronizers:
        - None

The java.lang.Thread.State enumeration class provides the states a thread may be in:

A thread can be in one of the following states:

  • NEW: A thread that has not yet started is in this state.
  • RUNNABLE: A thread executing in the Java virtual machine is in this state.
  • BLOCKED: A thread that is blocked waiting for a monitor lock is in this state.
  • WAITING: A thread that is waiting indefinitely for another thread to perform a particular action is in this state.
  • TIMED_WAITING: A thread that is waiting for another thread to perform an action for up to a specified waiting time is in this state.

The Thread.sleep(10 * 1000) call causes our thread to be in a TIMED_WAITING state. Knowing the state of each thread is important for diagnosing many problems, including infinite loops and excessive memory consumption/retention. In this example, we have purposely added a poorly performing method to the servlet, so there is nothing to diagnose.
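The mapping from Thread.sleep to TIMED_WAITING can be reproduced in a few lines. This standalone sketch (the thread name "sleeper" is arbitrary) starts a sleeping thread and polls its state at each stage of its lifecycle:

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try { Thread.sleep(10 * 1000); }
            catch (InterruptedException e) { /* woken early */ }
        }, "sleeper");

        System.out.println("Before start:   " + sleeper.getState()); // NEW
        sleeper.start();
        Thread.sleep(500); // give the thread time to reach sleep()
        System.out.println("While sleeping: " + sleeper.getState()); // TIMED_WAITING
        sleeper.interrupt(); // cut the 10 second sleep short
        sleeper.join();
        System.out.println("After join:     " + sleeper.getState()); // TERMINATED
    }
}
```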

Unfortunately, jstack information is not always useful. Specifically, if you are diagnosing an OutOfMemoryError, the code causing the memory leak may not be running in a thread at the time the jstack was executed. In this case, the stack traces might be misleading to the inexperienced eye.

The jmap utility will produce a heap dump of a given JVM. The jmap process is more heavyweight than jstack. Both utilities cause a “stop the world” pause in the JVM, but a jstack is almost instantaneous. The speed of jstack is particularly relevant on production servers where responsiveness is critical. A jmap’s speed depends on two factors: the amount of heap consumed and the complexity of the object graph within that heap. Generally, I expect most admins would not hesitate to take a jstack, but would consider current volume, size of the heap, etc. before taking a jmap. Conceivably, under some circumstances, an immediate restart of the server may result in less loss to the business. Of course, the down-time of a single server would be more affordable in a clustered environment.

While jstacks are speedy, the amount and type of data you can get from a jmap heap dump is far superior to that of a jstack. First of all, it is possible to re-create each thread’s stack from a heap dump. Here is the Usage output from the jmap utility:

    jmap -histo <pid>
      (to connect to running process and print histogram of java object heap)
    jmap -dump:<dump-options> <pid>
      (to connect to running process and dump java heap)

    dump-options:
      format=b     binary (default)
      file=<file>  dump heap to <file>

    Example:       jmap -dump:format=b,file=heap.bin <pid>

The histo option will only waste your time (and in the same breath, I might as well add that the jhat utility will too). Instead, create heap dumps using a command similar to the usage example. You can skip the format=b parameter (b for binary is the default), and I prefer a file extension of “.hprof” rather than “.bin”, as it is more indicative of the file contents, whereas .bin could represent any number of file types.
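If you can run code inside the target JVM, the same binary hprof dump can also be produced programmatically. This sketch uses the HotSpot-specific HotSpotDiagnosticMXBean (a real but HotSpot-only API; the file name heap.hprof is arbitrary, and the call fails if the target file already exists):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {

    // Writes a binary heap dump of this JVM, roughly equivalent to
    // running: jmap -dump:live,format=b,file=<path> <pid>
    static void dumpHeap(String path, boolean liveObjectsOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live=true forces a full GC first so only reachable objects are dumped
        bean.dumpHeap(path, liveObjectsOnly);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("heap.hprof", true);
        System.out.println("Dumped heap to heap.hprof");
    }
}
```

The resulting file opens in MAT exactly like a dump taken with the jmap command line.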

The industry standard tool for analyzing jmap heap dumps is MAT (Memory Analyzer Tool). Alternative tools, like Oracle’s JVisualVM which is bundled with the JDK, are convenient, but simply don’t have the investigative power of MAT. One such example of MAT’s superiority is its ability to re-create each thread’s stack. Assuming you have MAT available, open the heap dump via the File->Open Heap Dump… menu item and select your heap dump file. Once loaded, you may be presented with a Getting Started Wizard, which will generate various reports. Hit cancel, then navigate to the Open Query Browser button’s drop down menu…

…and click through to Java Basics->Thread Overview and Stacks:

Just click the Finish button on the resulting dialog and you’ll be presented with a table similar to the following:

Let’s now get back to comparing jstack and jmap. The above screen shot illustrates how a jmap can provide the same thread name information, but it also provides the thread class name, the thread’s shallow and retained heap (infinitely useful, but isn’t the topic of this blog), the thread’s context classloader, and whether it is a daemon thread. Furthermore, the data is presented in a succinct table, whereas the jstack output is raw text which is more tedious to sift through.

Let’s now drill into a stack. An easy way to find the thread for our example servlet is to recall the name of the thread from the jstack output: “http--”. Here is the stack view:

This should look similar to the jstack output, but there are some subtle differences. The jmap doesn’t provide you with thread lock information. Wait – that statement may not be entirely true. The lock information is probably in the dump, but MAT doesn’t assemble it with the thread information. In any case, other than speed, this is probably the only advantage of jstack over jmap. The other difference is that the jmap provides the full method signatures, whereas jstack only provides the class and method names, plus line numbers if they are available (line number data is optional and can be omitted at compile time). With the full method signature, there is no guess work for overloaded methods even when the line number information is missing. It should be noted that the method signature is provided using the JVM’s internal type and method descriptors. The descriptors are easy to understand, but explaining them is outside the scope of this article. See this blog on byte code engineering for details.

In addition to the jstack information, you can also examine thread and local variables. The jstack information might help you understand what code is causing the problem, but it may leave you wondering why. The ability to examine the stack variables lets you see the exact data your methods were operating on. For example, it might answer the question “how could this possibly be an infinite loop?”.

We’ll now take a look at the variables on the stack that were created when the sample servlet was invoked. The doGet method calls methodOne with a parameter value of 2. methodOne names this parameter a. If we expand the methodOne stack element, we should expect to see the local variable a with value 2, but that does not happen:

We only see the SampleServlet object, which is the this object and is always a local variable for all non-static methods. Evidently, MAT is not clever enough to determine primitive local variables. You’ll notice that MAT only exposes objects, whether you are looking at stacks, histograms, or any other MAT view.

Let’s now take a look at methodTwo which takes an object type, java.lang.String, as a parameter:

As expected, the second local variable is a String of value “Hi”. methodThree declares a ThreadLocal variable, which programmatically makes no sense as this variable will become inaccessible once the method has exited. It only makes sense to declare ThreadLocal variables at class-scope. Placing it in a method was strictly for presentation. In any case, thread local variable values don’t show up in the stack either:

We see the ThreadLocal object, but it references no value because the value is associated with the thread. To find the value, we’ll have to examine the thread’s threadLocals instance variable. I am going to drill into the thread by right-clicking on it and clicking through to the pop-up menu item List object->with outgoing references:

The result will be a table with a single row. Expand the thread node to view a table similar to the following:

I have drawn a box around the last row. This row indicates there are more rows to expand. Right-click on the row and click the bottom menu item, Expand All, from the pop-up menu. Next, scroll toward the bottom of the table until you can see the threadLocals variable. Expand that node. Under this node, expand the table variable node. Here are all the ThreadLocal objects for this thread. To find the methodThree ThreadLocal variable, we’ll have to find the java.lang.ThreadLocal$ThreadLocalMap$Entry object with a referent variable whose memory address (0x2b87d20) matches the ThreadLocal instance we are looking for. Or, since we have the source code and know what the value will be, we can look for that instead. Here is the result:
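The behaviour MAT is exposing here, namely that the value lives in the thread’s own threadLocals map rather than in the ThreadLocal object itself, is easy to demonstrate directly. In this standalone sketch, a second thread reading the same ThreadLocal sees nothing, because it has its own (empty) map:

```java
public class ThreadLocalDemo {
    static final ThreadLocal<String> LOCAL = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        // Stores the value in the *main* thread's threadLocals map,
        // keyed by the LOCAL instance itself.
        LOCAL.set("Discotek.ca rocks!");
        System.out.println("Main thread sees:  " + LOCAL.get());

        // Another thread consulting the same ThreadLocal finds no entry
        // in its own map, so get() returns null.
        Thread other = new Thread(() ->
                System.out.println("Other thread sees: " + LOCAL.get()));
        other.start();
        other.join();
    }
}
```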

Although we have just scratched the surface of what MAT is capable of, it should be obvious that it has tremendous benefit over jstack. jstack’s strengths are its speed and lock information, but if you can afford it, always choose to take a jmap heap dump to aid in diagnosing JVM problems.

The truth is, I didn’t set out to write an article comparing jstack and jmap. Despite examining thread and heap dumps for years, I was inspired by something I learned just the other day, which I want to pass on. The jmap utility has a -F switch to force a heap dump (not documented in the usage output above; see the jmap documentation for details). Oracle’s Troubleshooting Guide for HotSpot VM says:

If the jmap pid command does not respond because of a hung process, the -F option can be used 
(on Solaris OS and Linux only) to force the use of the Serviceability Agent.

This switch sounds like it ought to be useful, but it should be avoided if at all possible. In reality, the -F switch tells the JVM to use the Serviceability Agent interface for extracting the heap data. This alternate interface is extremely slow. A dump that takes five minutes without the -F switch might take an hour with it. So, never use the -F switch, right? Conceivably, there might be a circumstance where the JVM is hung in such a way that the default interface does not work. If getting the heap dump is critical, then it might be worth trying the -F switch. You would have to balance down-time against the value of understanding the root cause of the problem.

I also learned something else that you should be aware of. If the user running the JVM process is different than the user running the jmap process, you may get an error that prevents you from taking the heap dump. In this scenario, using the -F switch may allow you to take the dump, despite the mis-matched users. Once again, only use the -F switch in an emergency production situation. If you find yourself requiring the -F switch to circumvent a user mis-match, consider using your operating system’s facilities to temporarily become another user while taking the dump.

If you enjoyed this article, please let me know in the comments and feel free to follow Discotek.ca on twitter.

