
By Josh Neidich

The Problem: Poor Server Performance from Inefficient Java Programs

I inherited a Java program for a major OEM that was written by another developer. It was pretty daunting: I was new to the Java language, the original programmer was advanced, and the application performed a lot of sophisticated computations. My first hurdle was simply understanding it.

Once I understood it, I needed to find a way to run the program so I could begin to develop against it. Because I was working over a VPN, the program wouldn’t run to completion locally; it was taking in excess of 25 hours. Without being able to run the application fully on my side, I had trouble understanding it. If you can’t see what a program is doing, you can’t work with it. The side benefit of making the application more efficient would be improved performance on the server.

For context, this program ran weekly on the server and previously took between 7 and 8 hours to complete. By making it efficient enough to run locally over a VPN, I would also improve server-side performance by following the steps outlined below. Eventually, I got it down to approximately 18 minutes. The following article discusses how I approached the problem, how I narrowed down areas for improvement, the actual fix, and how to know when to stop investing time.

Understanding the Java Project Structure

The structure of the project involves three Java projects and a front-end framework. The Java projects consist of:

  • An API layer
  • A common components project
  • The ETL (for computations)

The API layer serves the front-end application. The common components project exists for code reuse, so we don’t have to repeat code in more than one location; some of its classes are needed by both the API layer and the ETL project. This is an intelligent design for code reuse, but it adds a bit of complexity when you come to the project fresh and try to familiarize yourself with the structure. The ETL is the main focus of the improvements here.

Adding Javadoc Comments – Documenting the Code

My first step in understanding the code was to document the code. Speaking of documentation, I have attached an example of how to make a self-driving car that won’t crash 😉.

/**
 * A self driving car that won't crash
 */
public class SelfDrivingCar {

    /**
     * A fail proof method of driving
     *
     * @param goingToCrash A bool to indicate an obstacle
     */
    public void drive(Boolean goingToCrash) {
        if (goingToCrash) {
            dontCrash();
        }
    }

    /**
     * Used to tell the car not to crash
     */
    public void dontCrash() {
        System.out.println("Don't Crash.");
    }

}


Joking aside, while documenting the code may seem obvious, adding Javadoc comments was helpful when hovering over a method called from another class. For example, when class B calls a method from class A, I can see the method’s purpose right in the hover tooltip. By carefully reviewing the existing methods and documenting them, I was able to build a complete picture of the code.

Refactoring the Code with Smaller Java Files

The next step I took was to refactor the code. Some Java files ran to 200–300+ lines of code, and since they also invoked methods from other classes, those hundreds of lines were significantly dense. Now that I could follow the code’s logic by reviewing the comments I had added above, I broke the code into smaller Java files and classes to make the flow easier to follow sequentially, and I extracted related methods into smaller utility classes.

Mapping the Code

Now that the code was broken into smaller, more purpose-built files, I was able to follow the program’s flow better. I created a document to record the flow of the application, since it was a linear process carried out on a weekly basis. I used this flow to identify areas to focus on for optimization, such as caching data.

The SequenceDiagram plugin for IntelliJ is also great for this task: SequenceDiagram – IntelliJ IDEs Plugin | Marketplace (jetbrains.com).


Limiting the Data Being Processed

Now that I had a good understanding of the sequence of events, I needed to be able to watch the data flow inside the program to try to identify any problem areas. This was challenging because the program wouldn’t complete due to the daily resetting of the VPN.

To counteract this, I tried to run the program all the way through, only this time limiting the amount of data being processed during testing. Instead of the normal volume, I allowed only one item to be processed from start to finish. I did this by identifying the method that starts the process and limiting the amount of data passed to it for processing. This way, instead of processing hundreds of items, I only had to worry about one.
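As a rough sketch of what that looked like (the item type, data, and method names here are hypothetical, since the real code is proprietary), the change amounted to trimming the input collection before handing it to the entry point:

import java.util.List;

public class LimitedRunExample {

    // Hypothetical item type; the real project processed far more complex records.
    record WorkItem(String id) {}

    // Stand-in for the method that kicks off the processing.
    static void process(List<WorkItem> items) {
        items.forEach(item -> System.out.println("Processing " + item.id()));
    }

    public static void main(String[] args) {
        List<WorkItem> allItems = List.of(new WorkItem("A"), new WorkItem("B"), new WorkItem("C"));

        // While testing over the VPN, pass only the first item to the entry point
        // so a full end-to-end run completes quickly.
        List<WorkItem> limited = allItems.subList(0, 1);

        process(limited);
    }
}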

Now that I was able to run the program, I tried to add logging throughout the application to see the data as it moved through. This quickly proved to be overwhelming. The log on the server generated over 20 GB of data, and the running command window kept overwriting data as it reached its limits. More data did not equal more clarity. I needed a better way.


Calculating the Speed of Different Sections

With the staggering amount of information being processed, there was no way that I could wade through all the information dumped by my logging. Instead, I decided to time the execution of various methods so I could focus my efforts on the real performance blockers. At the beginning of a method, I’d declare a variable like

long startTime = System.currentTimeMillis();


At the end of the method, I’d declare another variable like

long endTime = System.currentTimeMillis();


I would then log

System.out.println("Class and method name took " + (startTime - endTime) + " milliseconds to complete");


If there was an array involved, I’d also log the amount of information processed in any loop so that, when a problem was identified, I could see which sections of the application were involved and whether quantity played a role.
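Put together, the timing pattern looked roughly like this (the class, method, and data are placeholders, not the project’s real code):

import java.util.List;

public class TimingExample {

    // Placeholder for one of the ETL steps being timed.
    static void processVehicles(List<String> vins) {
        long startTime = System.currentTimeMillis();

        for (String vin : vins) {
            // ... the actual work would happen here ...
        }

        long endTime = System.currentTimeMillis();

        // Log the elapsed time plus how many items the loop handled,
        // so slow sections and data volume can be correlated.
        System.out.println("VehicleProcessor.processVehicles took "
                + (endTime - startTime) + " milliseconds to complete for "
                + vins.size() + " items");
    }

    public static void main(String[] args) {
        processVehicles(List.of("VIN1", "VIN2", "VIN3"));
    }
}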


The Fix

Now that I was able to see the relative performance of various areas of the application, I noticed two areas of concern for improvement.

  • The first was the retrieval of information from the database, since many small calls were being executed.
  • The second was saving the results of the calculations to the database.

Caching Data

To address the first area of concern, I looked at what information was being retrieved from the database and combined the many small SQL calls into a larger cache populated at the start of the program. This meant joining a couple of queries so that the information could be looked up locally rather than re-fetched from the database. Caching alone produced approximately a 30% speed improvement. Also, when searching through these cached results, I switched to parallelStream() to speed up data filtering. This was a promising start, but we still had a ways to go.
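As an illustrative sketch (the entity, fields, and lookup are made up for this example, not the project’s actual queries), the idea was to load the reference data once and then filter it in memory:

import java.util.List;
import java.util.stream.Collectors;

public class VehicleCache {

    // Simplified stand-in for the cached reference data.
    record Vehicle(String vin, String make, String model) {}

    // Populated once at the start of the run instead of issuing many small queries.
    private final List<Vehicle> cachedVehicles;

    public VehicleCache(List<Vehicle> vehiclesFromDb) {
        this.cachedVehicles = vehiclesFromDb;
    }

    // In-memory filtering with parallelStream() instead of another round trip to the database.
    public List<Vehicle> findByMake(String make) {
        return cachedVehicles.parallelStream()
                .filter(v -> v.make().equals(make))
                .collect(Collectors.toList());
    }
}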

Batch Processing to the Database

This was the area taking the longest to execute. Saving 1,000 rows over the VPN took approximately 2.5 minutes, and I had over 800,000 rows to write, so this wasn’t going to cut it. While researching how to improve this, I found that batch processing needed to be enabled for Hibernate.

A good tutorial on this can be found here: Batch Processing with Hibernate/JPA – HowToDoInJava.
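For reference, assuming a Spring Boot setup (the project uses Spring Data’s CrudRepository elsewhere), enabling Hibernate’s JDBC batching typically comes down to a few properties along these lines; the batch size here simply mirrors the 1,000-row batches used later:

spring.jpa.properties.hibernate.jdbc.batch_size=1000
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true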

When trying to implement the solution I found, I ran into a known issue where the id generation strategy specified on the Entity can disable batch processing (notably, IDENTITY generation turns off insert batching in Hibernate). I spent a lot of time trying various combinations of configuration and Entity id generation strategies, and the performance wasn’t improving. I was coming up against a roadblock in trying to tune the performance of Hibernate, the main library for interacting with the database.

I decided to write my own SQL and do a batch insert of 1,000 rows at a time. I wrote a helper class that I called BatchSaveUtil. This class was mainly responsible for creating the batches to be passed on to the custom SQL I wrote in the DaoImpl class.

If we have an Entity with a definition of:

package com.example.dbbatchsaveutil;

import lombok.Data;

import javax.persistence.*;

@Data
@Entity
@Table(name = "vehicle")
public class VehicleEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE,
            generator = "vehicle_id_seq")
    @SequenceGenerator(name = "vehicle_id_seq",
            allocationSize = 1000)
    private Long id;

    @Column(name = "vin")
    private String vin;

    @Column(name = "make")
    private String make;

    @Column(name = "model")
    private String model;

    @Column(name = "trim")
    private String trim;

}

I created a DaoImpl class with the following structure. I left out the id as the database was set with a default value, so less information needed to be transferred back and forth.

public void saveAll(List<VehicleEntity> vehicles) {
    StringBuilder sql = new StringBuilder();
    // Keep the table name dynamic in case we want to update it later.
    sql.append("insert into ").append(VehicleEntity.class.getAnnotation(Table.class).name());
    sql.append(" (vin, make, model, trim) values ");

    for (int i = 0; i < vehicles.size(); i++) {
        VehicleEntity vehicle = vehicles.get(i);
        if (i == 0) {
            sql.append("(");
        } else {
            sql.append(", (");
        }

        // Quote the string values so the generated SQL is valid.
        sql.append("'").append(vehicle.getVin()).append("', '")
           .append(vehicle.getMake()).append("', '")
           .append(vehicle.getModel()).append("', '")
           .append(vehicle.getTrim()).append("')");
    }

    jdbc.execute(sql.toString());
}

My batch save utility class served to break up the list of items to save to the database into smaller batches. These smaller batches were passed to the above SQL that did the actual saving to the database.

The batch save utility class relied on a fixed batch size constant and a single static method (see below):

private static final int BATCH_SIZE = 1000;

// Can also overload to pass in a JDBC-based DAO instead of a repository.
public static <T> void batchSaveAll(List<T> list, CrudRepository<T, ?> repo) {
    for (int i = 0; i < list.size(); i = i + BATCH_SIZE) {
        if (i + BATCH_SIZE > list.size()) {
            // Last, partial batch: subList's end index is exclusive, so use list.size().
            List<T> subList = list.subList(i, list.size());
            repo.saveAll(subList);
            break;
        }
        List<T> subList = list.subList(i, i + BATCH_SIZE);
        repo.saveAll(subList);
    }
}
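Wiring the two pieces together then looked roughly like this (the repository and list names are placeholders for the real ones):

// Placeholder names; the real list came from the ETL calculations (roughly 850,000 rows).
List<VehicleEntity> calculatedRows = calculationStep.getResults();

// Split into 1,000-row batches, each of which is written with the custom insert statement.
BatchSaveUtil.batchSaveAll(calculatedRows, vehicleRepository);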

I could now execute the program in full without having to limit the amount of data being processed, which meant I could test the full application. Saving 1,000 rows now took about 50 milliseconds, so I could save 850,000 rows to the database in less time than 1,000 had previously taken.

Knowing When Enough is Enough

Improving these two key areas and speeding the program up by a factor of roughly 25x was a fine start. But where do you stop?

I had now taken an application that used to run for approximately eight hours down to under 20 minutes. While there are a few areas I think I could still optimize, does it matter for an application that runs once a week on the server? If I shave it down from 20 minutes to 18, I’m not sure that really gets us anywhere. After talking with my managers, I decided this was enough of an optimization to call it complete.