Memory management for real-time applications in Java

One of the main advantages of using Java is not having to worry about disposing of objects [1]; that is, the Java runtime takes care of the memory management of Java objects.

It does this by garbage collecting Java objects that are no longer being used.

Garbage collection is a relatively complicated process. Broadly, the Java runtime traverses the heap looking for objects that are no longer reachable through any live reference and can therefore be safely reclaimed.
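As a rough illustration, consider the sketch below (the class and method names are made up for the example). Once the last reference to an object is dropped, the object becomes eligible for collection; when it is actually reclaimed is entirely up to the JVM.

```java
public class ReachabilityExample {

    static class Order {
        byte[] payload = new byte[1024]; // some per-object state
    }

    public static void main(String[] args) {
        Order order = new Order(); // reachable through the local variable 'order'
        handle(order);

        order = null; // no live reference remains: the Order instance is now
                      // eligible for garbage collection, but the JVM decides
                      // if and when it is actually reclaimed

        System.gc(); // only a hint; it does not force a collection
    }

    static void handle(Order o) {
        // process the order ...
    }
}
```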

However, garbage collection uses CPU cycles and may therefore impact the execution of application code. If a collection runs while application code is executing, that code may take longer to respond, and the latency of the user transaction increases. Even worse, since the user cannot know when a collection will occur, the latency increase is unpredictable.
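The effect is easy to observe with a small, purely illustrative allocation loop like the one below. On a stock JVM, most iterations finish in a few microseconds, but some occasionally take milliseconds when a collection happens to run; exactly which iterations are hit depends on the heap size and the collector in use.

```java
import java.util.ArrayList;
import java.util.List;

public class GcLatencyDemo {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            long start = System.nanoTime();

            retained.add(new byte[10_000]);   // steady allocation pressure
            if (retained.size() > 1_000) {
                retained.remove(0);           // some objects die, some survive
            }

            long micros = (System.nanoTime() - start) / 1_000;
            if (micros > 1_000) {             // report the occasional spike (> 1 ms)
                System.out.println("iteration " + i + " took " + micros + " us");
            }
        }
    }
}
```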

Real-time applications have strict timing requirements: they must execute application code within a determined, known latency. The unpredictable latency increases that garbage collection may cause therefore become a problem.

What are the solutions to this problem? One obvious solution is not to use Java for real-time applications. This is a poor solution. Java brings a lot as a programming language and as a development platform; we should be able to solve this problem in Java.

Another solution is to use a different memory management approach in Java instead of garbage collection. RTSJ, the Real-Time Specification for Java, defines the concepts of immortal memory and scoped memory. Immortal memory is never garbage collected; it lives until the JVM is brought down. Scoped memory is allocated and released in chunks: the user explicitly creates a scope of memory in which objects will live, and those objects are released when the scope is exited or destroyed. In both cases there is no need for garbage collection. However, there is a drawback: the onus of managing memory moves back to the user, as is the case for C/C++ applications. That still seems too high a price to pay. Can we do better?
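For illustration, here is a minimal sketch of what immortal and scoped memory look like under RTSJ's javax.realtime API. It requires an RTSJ-compliant JVM (a standard JDK does not ship these classes), and the sizes and allocation logic here are made up for the example.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.LTMemory;
import javax.realtime.RealtimeThread;
import javax.realtime.ScopedMemory;

public class RtsjMemoryAreas {
    public static void main(String[] args) {
        // Memory areas are entered from a schedulable object,
        // so the work runs inside a RealtimeThread.
        RealtimeThread rt = new RealtimeThread() {
            public void run() {
                // Immortal memory: never collected, lives until the JVM exits.
                ImmortalMemory.instance().enter(new Runnable() {
                    public void run() {
                        byte[] config = new byte[4096]; // allocated for the life of the JVM
                    }
                });

                // Scoped memory: a bounded area whose objects are released as a
                // whole when the last thread leaves the scope -- no GC involved.
                ScopedMemory scope = new LTMemory(64 * 1024, 64 * 1024);
                scope.enter(new Runnable() {
                    public void run() {
                        byte[] perTransaction = new byte[1024]; // freed when enter() returns
                    }
                });
            }
        };
        rt.start();
    }
}
```

The price is visible even in this sketch: the developer must decide up front which objects go where, size each memory area, and avoid references from longer-lived memory into a scope.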

OK, so let's reconsider garbage collection. The main problem with garbage collection is the unpredictable latency spikes it causes. Can we avoid this unpredictable behavior? Or rather, can we limit (i.e. bound) it? Yes: by doing garbage collection more often and more consistently, we can bound the maximum pause time. This is the approach taken by WLRT. Garbage collection thus becomes a predictable task with a known cost, which the real-time developer can account for and model as needed. And, most importantly, we don't sacrifice Java's ease of use.

[1] It should be noted that Java programs can still have memory leaks, in spite of garbage collection or any other memory management technique; for instance, by not removing objects from long-lived containers when they are no longer needed.
