What are some of the most useful tips for keeping your GC overhead low?

[Image: Java GC Star Wars]

With the upcoming-yet-delayed-once-again release of Java 9, the G1 (“Garbage First”) garbage collector is set to become the default collector of the HotSpot JVM. From the serial garbage collector all the way to the CMS collector, the JVM has seen many GC implementations throughout its lifetime, and the G1 collector is next in line.

As garbage collectors evolve, each generation (no pun intended) brings to the table advancements and improvements over previous ones. The parallel GC that followed the serial collector made garbage collection multithreaded, utilizing the compute capabilities of multi-core machines. The CMS (“Concurrent Mark-Sweep”) collector that followed divided the collection into multiple phases, allowing much of the collection work to be done concurrently while the application threads are running — resulting in much less frequent “stop-the-world” pauses. G1 adds better performance on JVMs with very large heaps, and has much more predictable and uniform pauses.

However advanced GCs get, their Achilles' heel remains the same: redundant and unpredictable object allocations. Here are some quick, applicable, eternal tips that will help you keep your GC overhead at bay, no matter which garbage collector you choose to utilize.

Tip #1: Predict Collection Capacities

All standard Java collections, as well as most custom and extended implementations (such as Trove and Google's Guava), use underlying arrays (either primitive- or object-based). Since arrays are fixed in size once allocated, adding items to a collection may in many cases cause an old underlying array to be dropped in favor of a larger newly-allocated array.

Most collection implementations try to optimize this re-allocation process and keep it to an amortized minimum, even if the expected size of the collection is not provided. However, the best results can be achieved by providing the collection with its expected size upon construction.

Let’s take the following code as a simple example:

public static <T> List<T> reverse(List<? extends T> list) {

    List<T> result = new ArrayList<>();

    for (int i = list.size() - 1; i >= 0; i--) {
        result.add(list.get(i));
    }

    return result;
}
This method allocates a new list, then fills it up with items from another list, only in reverse order.

The potentially painful point, and the one we can optimize, is the line that adds items to the new list. With each addition, the list needs to make sure its underlying array has enough free slots to accommodate the new item. If it does, it simply stores the new item in the next free slot. If not, it allocates a new, larger underlying array, copies the old array's contents into it, and then adds the new item. This results in multiple array allocations, which linger until the GC eventually collects them.
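The exact growth policy is an implementation detail, but in typical OpenJDK builds, ArrayList grows its backing array by roughly half each time it runs out of room, starting from a default capacity of 10. A minimal sketch, assuming that policy, of how many reallocations adding items one by one would trigger (the class name and method are illustrative, not part of any library):

```java
public class GrowthSketch {

    // Counts how many backing-array reallocations an ArrayList-like
    // collection performs when n items are added one at a time.
    // Assumes OpenJDK's typical policy: new capacity = old + (old >> 1),
    // starting from the default initial capacity of 10.
    static int reallocationsFor(int n) {
        int capacity = 10;
        int reallocations = 0;
        while (capacity < n) {
            capacity += capacity >> 1; // grow by ~1.5x
            reallocations++;
        }
        return reallocations;
    }

    public static void main(String[] args) {
        // Adding a million items triggers dozens of array copies...
        System.out.println(reallocationsFor(1_000_000));
        // ...while a collection pre-sized to its final size triggers none.
        System.out.println(reallocationsFor(10));
    }
}
```

Each of those reallocations copies the entire array and leaves the old one behind as garbage, which is exactly what pre-sizing avoids.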

We can avoid these redundant allocations by letting the list know how many items it's expected to hold when we construct it:

public static <T> List<T> reverse(List<? extends T> list) {

    List<T> result = new ArrayList<>(list.size());

    for (int i = list.size() - 1; i >= 0; i--) {
        result.add(list.get(i));
    }

    return result;
}


This makes the initial allocation performed by the ArrayList constructor large enough to hold list.size() items, which means it does not have to reallocate memory during the iteration.

Guava’s collection classes take this a step further, allowing us to initialize collections either with an exact number of expected items, or an estimate:

List<T> result = Lists.newArrayListWithCapacity(list.size());
List<T> result = Lists.newArrayListWithExpectedSize(list.size());

The former is for cases in which we know exactly how many items the collection is going to hold, while the latter allocates some padding to account for estimation errors.

Tip #2: Process Streams Directly

When processing streams of data, such as data read from files or downloaded over the network, it’s very common to see something along the lines of:

byte[] fileData = readFileToByteArray(new File("myfile.txt"));

The resulting byte array could then be parsed into an XML document, JSON object or Protocol Buffer message, to name a few popular options.

When dealing with large files or ones of unpredictable size, this is obviously a bad idea, as it exposes us to OutOfMemoryErrors in case the JVM can’t actually allocate a buffer the size of the whole file.

But even if the size of the data seems manageable, using the above pattern can cause significant overhead when it comes to garbage collection, as it allocates a relatively large blob on the heap to hold the file data.

A better way to approach this is to use the appropriate InputStream (FileInputStream in this case) and feed it directly into the parser, without first reading the whole thing into a byte array. All major libraries expose APIs to parse streams directly, for example:

FileInputStream fis = new FileInputStream(fileName);
MyProtoBufMessage msg = MyProtoBufMessage.parseFrom(fis);
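The same idea works with nothing but the JDK: rather than loading a whole file into memory, stream through it with a fixed-size buffer. A minimal sketch, using a hypothetical line-counting task as a stand-in for real parsing (the class and method names are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StreamingCount {

    // Counts lines by streaming through the file with BufferedReader's
    // fixed-size internal buffer, instead of allocating one large byte
    // array to hold the entire content.
    static long countLines(Path file) throws IOException {
        long count = 0;
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            while (reader.readLine() != null) {
                count++;
            }
        }
        return count;
    }
}
```

The heap usage here stays constant regardless of the file's size, so neither OutOfMemoryErrors nor large short-lived blobs are a concern.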

Tip #3: Use Immutable Objects

Immutability has many, many advantages. Don’t even get me started. However, one advantage that’s rarely given the attention it deserves is its effect on garbage collection.

An immutable object is an object whose fields (and specifically non-primitive fields in our case) cannot be modified after the object has been constructed. For example:

public class ObjectPair {

    private final Object first;
    private final Object second;

    public ObjectPair(Object first, Object second) {
        this.first = first;
        this.second = second;
    }

    public Object getFirst() {
        return first;
    }

    public Object getSecond() {
        return second;
    }
}
Instantiating the above class results in an immutable object — all its fields are marked final and cannot be modified past construction.

Immutability implies that all objects referenced by an immutable container have been created before the construction of the container completes. In GC terms: The container is at least as young as the youngest reference it holds. This means that when performing garbage collection cycles on young generations, the GC can skip immutable objects that lie in older generations, since it knows for sure they cannot reference anything in the generation that’s being collected.

Fewer objects to scan mean fewer memory pages to scan, and fewer memory pages to scan mean shorter GC cycles, which mean shorter GC pauses and better overall throughput.

Tip #4: Be wary of String Concatenation

Strings are probably the most prevalent non-primitive data structure in any JVM-based application. However, their implicit weight and convenience of use make them easy culprits in applications’ large memory footprints.

The problem does not lie with literal strings obviously, as these are inlined and interned, but rather with strings that are allocated and constructed at runtime. Let’s take a look at a quick example of dynamic string construction:

public static <T> String toString(T[] array) {

    String result = "[";

    for (int i = 0; i < array.length; i++) {
        result += (array[i] == array ? "this" : array[i]);
        if (i < array.length - 1) {
            result += ", ";
        }
    }

    result += "]";

    return result;
}

This is a nice little method that takes an array and returns a string representation for it. That is also hell in terms of object allocation.

It’s hard to see past all of this syntactic sugar, but what’s actually going on behind the scenes is this:

public static <T> String toString(T[] array) {

    String result = "[";

    for (int i = 0; i < array.length; i++) {

        StringBuilder sb1 = new StringBuilder(result);
        sb1.append(array[i] == array ? "this" : array[i]);
        result = sb1.toString();

        if (i < array.length - 1) {
            StringBuilder sb2 = new StringBuilder(result);
            sb2.append(", ");
            result = sb2.toString();
        }
    }

    StringBuilder sb3 = new StringBuilder(result);
    sb3.append("]");
    result = sb3.toString();

    return result;
}

Strings are immutable, which means they in themselves do not get modified when concatenation takes place, but rather new strings are allocated in turn. In addition, the compiler utilizes the standard StringBuilder class in order to actually perform these concatenations. This leads to double trouble, as in each iteration of the loop, we get both (1) implicit allocations of interim strings, and (2) implicit allocations of interim StringBuilder objects to help us construct the final result.

The best way to avoid this is to explicitly use StringBuilder and append to it directly, instead of using the somewhat naive concatenation operator (“+”). Here’s what it may look like:

public static <T> String toString(T[] array) {

    StringBuilder sb = new StringBuilder("[");

    for (int i = 0; i < array.length; i++) {
        sb.append(array[i] == array ? "this" : array[i]);
        if (i < array.length - 1) {
            sb.append(", ");
        }
    }

    sb.append("]");

    return sb.toString();
}

Here, only one StringBuilder is allocated by us at the beginning of the method. From that point on, all strings and list items are appended to that sole StringBuilder, which is eventually converted into a string only once, using its toString method, and returned.
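As a quick sanity check, here is the StringBuilder version wrapped in a hypothetical ArrayPrinter class (the class name is illustrative); it produces exactly the same output as the naive version, just without the interim allocations:

```java
public class ArrayPrinter {

    // StringBuilder-based version: one builder for the whole method,
    // one final String allocation in toString().
    public static <T> String toString(T[] array) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < array.length; i++) {
            // Self-reference check, cast to Object for the comparison
            sb.append(array[i] == (Object) array ? "this" : array[i]);
            if (i < array.length - 1) {
                sb.append(", ");
            }
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        System.out.println(toString(new Integer[] {1, 2, 3})); // prints "[1, 2, 3]"
    }
}
```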

Tip #5: Use Specialized Primitive Collections

Java’s standard collection library is convenient and generic, allowing us to use collections with semi-static type binding. This is fantastic if we want to use, for example, a set of strings (Set<String>), or a map between a pair and a list of strings (Map<Pair, List<String>>).

The real problem begins when we want to hold a list of ints, or a map with values of type double. Since generic types can’t be used with primitives, the alternative is using the boxed types instead: there is no such thing as List<int>, so we need to use List<Integer>.

This is very wasteful, as an Integer is a full-fledged object, replete with a 12-byte object header and an internal 4-byte int field holding its value. This sums up to 16 bytes per Integer item, four times the size the same values would take in a primitive int array. The bigger problem, however, is the fact that all these Integers are actual object instances that need to be accounted for during garbage collection.

In order to tackle this problem, we at OverOps use the excellent Trove collection library. Trove gives up some (but not all) generics in favor of specialized memory-efficient primitive collections. For example, instead of the wasteful Map<Integer, Double>, there is a specialized alternative in the form of TIntDoubleMap:

TIntDoubleMap map = new TIntDoubleHashMap();
map.put(5, 7.0);
map.put(-1, 9.999);

Trove’s underlying implementation uses primitive arrays, so no boxing (int -> Integer) or unboxing (Integer -> int) takes place while manipulating collections, and no objects are stored in place of the primitives.
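If adding a dependency like Trove isn't an option, plain primitive arrays (and, since Java 8, the primitive stream specializations such as IntStream) achieve the same goal with the JDK alone. A minimal sketch (the class and method names are illustrative):

```java
import java.util.stream.IntStream;

public class PrimitiveSum {

    // Summing a primitive int[] via IntStream, the primitive
    // specialization of Stream<Integer>: no Integer objects are
    // allocated, so nothing extra is handed to the GC.
    static int sum(int[] values) {
        return IntStream.of(values).sum();
    }

    public static void main(String[] args) {
        // Values are stored inline in the array, not as boxed objects.
        int[] values = {5, -1, 42};
        System.out.println(sum(values)); // prints 46
    }
}
```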

Final Thoughts

As garbage collectors continue to advance, and as run-time optimization and JIT compilers get smarter, we as developers will find ourselves caring less and less about how to write GC-friendly code. However, for the time being, and no matter how advanced G1 may be, there’s still a lot we can do in order to help the JVM.



Niv is co-founder and VP R&D at OverOps. In his free time Niv is an avid basketball fan and a student of languages.
  • MrEasy

    Considering tip #1 in #4 would initialize the StringBuilder with a size depending upon array.size.

    • http://www.takipi.com/ Niv Steingarten

      That’s definitely possible.

      The problem with applying #1 to #4 in the above example, is that if we wanted to hint the StringBuilder efficiently, we’d have to first iterate over the array of items, and sum up the total length of all the items. Only then would we be able to instantiate the StringBuilder with the needed capacity. This requires two iterations over the array instead of just one, so one has to consider the trade-off between GC overhead and CPU overhead. StringBuilder handles concatenation fairly well with low amortized cost, so I’m not sure I’d go that far in this case.

      Thanks for the comment!

  • Simon Geard

    Conversely, don’t get too carried away with StringBuilder. It definitely makes sense to use it in a looping construct – but I’ve also seen people using it for assembling static strings (e.g joining the lines of an SQL query), which would have been handled by the compiler if they’d just used simple concatenation.

    • http://www.takipi.com/ Niv Steingarten

      Absolutely, Simon; thanks for the comment.
      For basic string concatenation, especially when dealing with a small constant number of items, leaving it to the compiler to use implicit StringBuilders is perfectly fine. It gets messier when the number of object instantiations becomes significant.

      • Simon Geard

        I’ve seen both extremes over the years… a very fast GC-related server crash from someone using concatenation in a heavily used tight loop… but also a lot of hard-to-read code from developers who’d got the idea that string concatenation was bad for performance and had started using StringBuffer for *everything*…

        • http://www.jooq.org Lukas Eder

          Don’t you wish we finally had multi-line strings and string interpolation in Java…

  • http://www.jooq.org Lukas Eder

    While you’re at it, pass a char to the StringBuilder when you can, rather than a one character String

  • zhanggoo

    I am a bit confused about #3 – Immutability. First of all, the container is not immutable in the Effective Java way. But most importantly, the two properties – _first_ and _second_ – are object references. That means _first_ and _second_ can be non-immutable objects. So how can GC know this and take advantage of it? Also you can override it via reflection to change the references as well. GC does not know about this. So how can GC make assumptions about immutability/age of objects?

  • raupach

    #3 is a very dangerous example. If the Objects passed in the constructor are mutable the whole thing is mutable! Just declaring something final does not make an Object immutable! If you go that route you need a copy constructor and defensive copying in the getters.

    I also doubt that less objects to scan improves GC. Any resources to that?

  • unni mana

    I am bit late for the party.

    However, I have one clarification regarding your example.

    If the list is defined empty, it will allocate the default size, i.e. 16, and initialize it. So the default size of the list is 16, and when an item is inserted into the list, it will check whether the size is exceeded or not by calling its ‘ensureCapacity’ method. This will be called every time an item is inserted into it, and the counter will be increased by 1. If this counter is greater than the default size, it will allocate new memory by calling Arrays.copyOf(…..).

    In this context a new object is not getting created while adding an item to the list.
    Please correct me if I am wrong.

  • Shop A Cop

    #1 and #3 helped me today, GC interval and time dropped significantly. Thanks