Monitoring Spring Boot Applications with Prometheus – Part 2

In my previous post, I described how to use Prometheus and its JVM client library in a Spring Boot application to gather common JVM metrics. In this post, I will demonstrate how to write your own application-specific metrics using the client library. Prometheus client libraries support four core metric types: Counter, Gauge, Histogram and Summary. The Prometheus documentation has a brief but clear explanation of each metric type.

I will create two metrics to gather statistics about incoming requests to a Spring web application: a counter to count how many requests the web application handles, and a summary to measure the processing time of the incoming requests.

Application Setup

To demonstrate the metrics we are going to implement, I have the following controller implementing two endpoints, /endpointA and /endpointB. They do nothing except waste a random amount of time between 0 and 100 ms.

import org.apache.commons.lang3.RandomUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
...
@RestController
public class HomeController {
     private Logger logger = LoggerFactory.getLogger(getClass());
 
     @RequestMapping("/endpointA")
     public void handlerA() throws InterruptedException {
          logger.info("/endpointA");
          Thread.sleep(RandomUtils.nextLong(0, 100));
     }
 
     @RequestMapping("/endpointB")
     public void handlerB() throws InterruptedException {
          logger.info("/endpointB");
          Thread.sleep(RandomUtils.nextLong(0, 100));
     }
}

Counter Metric

A counter is a metric whose numeric value only ever goes up (it resets to zero when the process restarts). Here we add a Spring MVC handler interceptor to keep track of the cumulative number of requests the web application receives:

import io.prometheus.client.Counter;
...
public class RequestCounterInterceptor extends HandlerInterceptorAdapter {

     // @formatter:off
     // Note (1)
     private static final Counter requestTotal = Counter.build()
          .name("http_requests_total")
          .labelNames("method", "handler", "status")
          .help("Http Request Total").register();
     // @formatter:on

     @Override
     public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception e)
               throws Exception {
          // Update counters
          String handlerLabel = handler.toString();
          // get short form of handler method name
          if (handler instanceof HandlerMethod) {
               Method method = ((HandlerMethod) handler).getMethod();
               handlerLabel = method.getDeclaringClass().getSimpleName() + "." + method.getName();
          }
          // Note (2)
          requestTotal.labels(request.getMethod(), handlerLabel, Integer.toString(response.getStatus())).inc();
     }
}

Note:

  1. We implement a counter using the io.prometheus.client.Counter class to keep track of the number of incoming requests handled by this interceptor. The counter is named http_requests_total and has a number of labels (method, handler and status). A label is an attribute of a metric which can be used in queries to filter and aggregate metrics.
  2. The counter is incremented using the Counter’s inc() method. The values of the labels method, handler and status are populated with the request HTTP method (GET/POST), the Spring MVC controller and method, and the response HTTP status respectively.

Summary Metric

A summary is a complex metric which tracks both the count and the sum of a set of observations. See here for a comprehensive explanation of summary and histogram metrics. Similar to the previous section, we implement the metric within a handler interceptor class:

import io.prometheus.client.Summary;
...
public class RequestTimingInterceptor extends HandlerInterceptorAdapter {

      private static final String REQ_PARAM_TIMING = "timing";

      // @formatter:off
      // Note (1)
      private static final Summary responseTimeInMs = Summary
           .build()
           .name("http_response_time_milliseconds")
           .labelNames("method", "handler", "status")
           .help("Request completed time in milliseconds")
           .register();
      // @formatter:on

      @Override
      public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
           // Note (2)
           request.setAttribute(REQ_PARAM_TIMING, System.currentTimeMillis());
           return true;
      }

      @Override
      public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex)
                throws Exception {
           Long timingAttr = (Long) request.getAttribute(REQ_PARAM_TIMING);
           long completedTime = System.currentTimeMillis() - timingAttr;
           String handlerLabel = handler.toString();
           // get short form of handler method name
           if (handler instanceof HandlerMethod) {
                Method method = ((HandlerMethod) handler).getMethod();
                handlerLabel = method.getDeclaringClass().getSimpleName() + "." + method.getName();
           }
         // Note (3)
          responseTimeInMs.labels(request.getMethod(), handlerLabel, Integer.toString(response.getStatus()))
                    .observe(completedTime);
      }
}

Note:

  1. We implement a metric with the io.prometheus.client.Summary class. The same set of labels as used in the request counter is used here.
  2. The response time is measured as the time elapsed between the calls to the preHandle and afterCompletion methods. This is for illustration only, to demonstrate the use of the Prometheus client library.
  3. The summary metric is updated by calling the observe() method with the response time value.

Collecting Metrics

Now that we have implemented the handler interceptors, we need to register them with Spring MVC so that they are applied to the test endpoints.
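One way to register them is via a Spring MVC configuration class; the sketch below assumes the interceptors can simply be instantiated, and the class name WebConfig is arbitrary:

@Configuration
public class WebConfig extends WebMvcConfigurerAdapter {

     @Override
     public void addInterceptors(InterceptorRegistry registry) {
          // apply both metric interceptors to every request mapping
          registry.addInterceptor(new RequestCounterInterceptor());
          registry.addInterceptor(new RequestTimingInterceptor());
     }
}

Start up the Spring Boot web application, hit the two endpoints a few times in the browser and then go to the Prometheus endpoint, e.g. http://localhost:8080/prometheus. It should return something similar to the following: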

...
# HELP http_response_time_milliseconds Request completed time in milliseconds
# TYPE http_response_time_milliseconds summary
http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerA",status="200",} 3.0
http_response_time_milliseconds_sum{method="GET",handler="HomeController.handlerA",status="200",} 169.0
http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerB",status="200",} 1.0
http_response_time_milliseconds_sum{method="GET",handler="HomeController.handlerB",status="200",} 59.0
# HELP http_requests_total Http Request Total
# TYPE http_requests_total counter
http_requests_total{method="GET",handler="HomeController.handlerA",status="200",} 3.0
http_requests_total{method="GET",handler="HomeController.handlerB",status="200",} 1.0

Note the summary metric http_response_time_milliseconds actually produces two time series: _count and _sum. In practice, we therefore would not need the separate counter metric; it is included here mainly to demonstrate how to implement a simple counter with the Prometheus client library.

Also, data for each metric are collected separately for each distinct set of label values. In this example, we get separate series for each endpoint because the value of the handler label differs. Since we also include the response status as a label, we will potentially get further series with a non-200 status value, for example:

http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerA",status="500",} 3.0

Queries in Prometheus

One of the useful features of Prometheus is its powerful query language for manipulating the time series data collected.

For example, the cumulative total number of HTTP requests recorded by our counter is not of much use on its own. Typically we would want the number of requests the server receives per second, to get an idea of the system load at various times of the day. This can be achieved with a query like the following:

rate(http_requests_total{job="blog", handler="HomeController.handlerA"}[5m])

The above query uses the built-in rate() function to return the per-second rate of HTTP requests measured over the last 5 minutes ([5m]) for the endpoint /endpointA, as specified by the label filter handler="HomeController.handlerA". The query can be modified to only count requests that are not successful (not 2xx) by including the status label in the query:

rate(http_requests_total{job="blog", handler="HomeController.handlerA", status!~"^2..$"}[5m])

Prometheus attaches two extra labels, job and instance, to every scraped series, based on the target definitions in the prometheus.yml configuration file. It is recommended to always include the job name in the query.

Similarly, we can get the average response time over the last 5 minutes with the following query:

rate(http_response_time_milliseconds_sum{job="blog"}[5m])/rate(http_response_time_milliseconds_count{job="blog"}[5m])

For more details on the operators and functions Prometheus supports, see the documentation here.

This blog post introduced, with examples, how to implement application-specific metrics using the Prometheus JVM client library, as well as how to use the query functions provided by Prometheus to filter and aggregate the collected data. The ability for developers to implement their own metrics, together with Prometheus’s powerful query language, is particularly useful when one needs to implement, monitor and analyse specific application functions beyond the typical JVM and application metrics.

Monitoring Spring Boot Applications with Prometheus – Part 1

This blog post will demonstrate how to use Prometheus to monitor a Spring Boot web application. Prometheus is an open source tool for monitoring systems that collects metrics from target systems as time series data. It supports multiple approaches for instrumenting application code. I am going to show how to do this using the Prometheus JVM client library.

Instrumenting with Prometheus JVM client

POM setup

I set up a Spring Boot project in Maven and include the following dependencies for the Prometheus JVM client (version 0.0.16):

 <!-- Hotspot JVM metrics -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient_hotspot</artifactId>
      <version>${prometheus.version}</version>
 </dependency>
 <!-- Exposition servlet -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient_servlet</artifactId>
      <version>${prometheus.version}</version>
 </dependency>
 <!-- The client -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient</artifactId>
      <version>${prometheus.version}</version>
 </dependency>
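The ${prometheus.version} placeholder refers to a Maven property; given the version mentioned above, the POM would also contain something like:

 <properties>
      <prometheus.version>0.0.16</prometheus.version>
 </properties>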

Configure and implement Metric endpoint

Prometheus collects metrics primarily by scraping an HTTP endpoint exposed by the target application at regular intervals. To expose such an endpoint, include a Java configuration class as follows:

@Configuration
@ConditionalOnClass(CollectorRegistry.class)
public class PrometheusConfiguration {

     @Bean
     @ConditionalOnMissingBean
     CollectorRegistry metricRegistry() {
         return CollectorRegistry.defaultRegistry;
     }

     @Bean
     ServletRegistrationBean registerPrometheusExporterServlet(CollectorRegistry metricRegistry) {
           return new ServletRegistrationBean(new MetricsServlet(metricRegistry), "/prometheus");
     }

...
}

The above code adds the endpoint (/prometheus) to the Spring Boot application. Now we are ready to add some metrics to it. The Prometheus JVM client includes a number of standard exporters to collect common JVM metrics such as memory and CPU usage. Let’s add them to our new /prometheus endpoint.

First, we create an exporter register class:

/**
 * Metric exporter register bean to register a list of exporters to the default
 * registry
 */
public class ExporterRegister {

     private List<Collector> collectors; 

     public ExporterRegister(List<Collector> collectors) {
          for (Collector collector : collectors) {
              collector.register();
          }
          this.collectors = collectors;
     }

     public List<Collector> getCollectors() {
          return collectors;
     }

}

The above class is just a utility to register a collection of metric collectors with the default registry. Now add the standard exporters from the Prometheus JVM client:

import io.prometheus.client.hotspot.MemoryPoolsExports;
import io.prometheus.client.hotspot.StandardExports;
...  
     @Bean
     ExporterRegister exporterRegister() {
           List<Collector> collectors = new ArrayList<>();
           collectors.add(new StandardExports());
           collectors.add(new MemoryPoolsExports());
           ExporterRegister register = new ExporterRegister(collectors);
           return register;
      }

We just added two exporters: (1) StandardExports, which provides CPU usage metrics, and (2) MemoryPoolsExports, which adds memory usage metrics for the JVM. To see what metrics are now available, go to the following URL in the browser:

http://localhost:8080/prometheus

The browser should display something like the output below (truncated, as the full list is long):

# HELP jvm_up_time_seconds System uptime in seconds.
# TYPE jvm_up_time_seconds gauge
jvm_up_time_seconds 15.0
# HELP jvm_cpu_load_percentage JVM CPU Usage %.
# TYPE jvm_cpu_load_percentage gauge
jvm_cpu_load_percentage 37.18078068931383
# HELP os_cpu_load_percentage System CPU Usage %.
# TYPE os_cpu_load_percentage gauge
...

Install and Set Up Prometheus

Now that we have implemented the metric endpoint for the Spring Boot application, we are ready to install and configure Prometheus. Follow the instructions here to install Prometheus and start up the server. You should then be able to access the server in your browser, e.g. http://localhost:9090/targets

[Screenshot: Prometheus targets page]

By default, Prometheus is configured to monitor itself, which is handy. Now let’s update the configuration to scrape our Spring Boot app. Open the file prometheus.yml in the Prometheus folder and add the following lines under the scrape_configs section:

 - job_name: 'blog'
   scrape_interval: 5s
   metrics_path: '/prometheus'
   static_configs:
     - targets: ['localhost:8080']

Restart Prometheus and refresh your browser to show the following:

[Screenshot: Prometheus targets page showing the new blog target]

Prometheus provides a rather basic graphing function. I will show how to integrate Prometheus with other graphing software in a later post. For now, let’s try to display the memory usage of the Spring Boot application. Go to the Graph tab, select the metric jvm_memory_bytes_used from the dropdown and click the Execute button. This should end up like the following in the browser. Note the metric has two different sets of time series, heap and nonheap usage.
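To plot only the heap series, a label filter can be added to the expression; the example below assumes the metric carries an area label with values heap and nonheap:

jvm_memory_bytes_used{job="blog", area="heap"}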

[Screenshot: graph of the jvm_memory_bytes_used metric showing heap and nonheap series]

Summary and What Next

In this blog post, I described how to add Prometheus monitoring to a Spring Boot application via the Prometheus JVM client, generate some JVM metrics using the provided exporter classes, and configure Prometheus to scrape the data. I ended by demonstrating how to view the metrics with the graphing features in Prometheus.

In a future blog post, I will show how to implement custom metrics in the Spring Boot application using the Prometheus JVM client, as well as how to use the Prometheus expression language to query time series data and return metrics relevant for monitoring. I will also demonstrate how to create a dashboard in Grafana using data from Prometheus.


Securing Multiple Resources in OAuth2 Resource Server using Spring Security

In my previous blog post, I demonstrated how to set up an OAuth2 authorization server using Spring Security. In this post, I will demonstrate how to set up the security configurations in a resource server to secure multiple resources.

The example here uses Spring Boot 1.2.7 and is a standalone OAuth2 resource server which secures multiple resources, each with its own id and access rules. To do that, instead of using @EnableResourceServer, we have to define a ResourceServerConfiguration bean for each resource to be secured, as shown below:

@Configuration
@EnableOAuth2Resource
public class OAuth2ServerConfig {

     @Bean
     protected ResourceServerConfiguration stockResources() {
          ResourceServerConfiguration resource = new ResourceServerConfiguration() {
               // Switch off the Spring Boot @Autowired configurers
               public void setConfigurers(List<ResourceServerConfigurer> configurers) {
                    super.setConfigurers(configurers);
               }
               @Override
               public int getOrder() {
                    return 30;
               }
          };
          resource.setConfigurers(Arrays.<ResourceServerConfigurer> asList(new ResourceServerConfigurerAdapter() {
               @Override
               public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
                    resources.resourceId("stock");
               }
               @Override
               public void configure(HttpSecurity http) throws Exception {
                    http.antMatcher("/stock/**").authorizeRequests().anyRequest().access("#oauth2.hasScope('read')");
               }
          }));
          return resource;
     }
}

To secure the resource “stock”, we first implement a bean of ResourceServerConfiguration, manually setting the configurers by calling the super.setConfigurers() method and setting the order by overriding the getOrder() method. The security configuration for the resource can then be set up on the returned ResourceServerConfiguration object (variable resource) via an anonymous ResourceServerConfigurerAdapter.

To secure another resource, just define another ResourceServerConfiguration bean similar to the above, with a different resource id and its own OAuth2 access rules, as sketched below. Note that the order value has to be unique for each bean.
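For example, a second resource could be secured like this; the resource id order, the path /order/** and the order value 31 are all illustrative:

     @Bean
     protected ResourceServerConfiguration orderResources() {
          ResourceServerConfiguration resource = new ResourceServerConfiguration() {
               // Switch off the Spring Boot @Autowired configurers
               public void setConfigurers(List<ResourceServerConfigurer> configurers) {
                    super.setConfigurers(configurers);
               }
               @Override
               public int getOrder() {
                    // must differ from the order of the other resource beans
                    return 31;
               }
          };
          resource.setConfigurers(Arrays.<ResourceServerConfigurer> asList(new ResourceServerConfigurerAdapter() {
               @Override
               public void configure(ResourceServerSecurityConfigurer resources) throws Exception {
                    resources.resourceId("order");
               }
               @Override
               public void configure(HttpSecurity http) throws Exception {
                    http.antMatcher("/order/**").authorizeRequests().anyRequest().access("#oauth2.hasScope('read')");
               }
          }));
          return resource;
     }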

Note that I have to override the getOrder() method in the ResourceServerConfiguration here, instead of calling the setter, like:

     ResourceServerConfiguration resource = ...
     resource.setOrder(30)
     resource.setConfigurers(...

or Spring Security will throw an exception like the one below:

Caused by: java.lang.IllegalStateException: @Order on WebSecurityConfigurers must be unique. Order of 2147483626 was already used, so it cannot be used on . OAuth2ServerConfig$1@1a40489f too.


Configuring Multiple JPA Entity Managers In Spring Boot

This blog post will demonstrate how to set up multiple entity managers in Spring to connect to different data sources. The solution here also supports Spring Data.

Update Maven Pom file

Include the Spring Boot dependency for Spring Data JPA:

 <dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-data-jpa</artifactId>
 </dependency>

Disable DataSourceAutoConfiguration

Since we are setting up the data sources ourselves, disable the auto-configuration in Spring Boot:

@Configuration
@ComponentScan
@EnableAutoConfiguration (exclude = {  DataSourceAutoConfiguration.class })
public class Application {
   ...

Configure Primary Entity Manager

Below is the Java configuration for the primary entity manager

@Profile("!test")          // 1
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "au.com.myblog.dao", entityManagerFactoryRef = "entityManager", transactionManagerRef = "transactionManager")        // 2
public class PrimaryMysqlDBConfiguration {
     @Bean(name = "dataSource")      // 3
     @Primary
     @ConfigurationProperties(prefix = "primary.datasource.mysql")
     public DataSource mysqlDataSource() {
          return DataSourceBuilder.create().build();
     }
    @PersistenceContext(unitName = "primary")   // 4
     @Primary
     @Bean(name = "entityManager")
     public LocalContainerEntityManagerFactoryBean mySqlEntityManagerFactory(EntityManagerFactoryBuilder builder) {
          return builder.dataSource(mysqlDataSource()).persistenceUnit("primary").properties(jpaProperties())
                    .packages("au.com.myblog.domain").build();
      }
     private Map<String, Object> jpaProperties() {
          Map<String, Object> props = new HashMap<>();
          props.put("hibernate.ejb.naming_strategy", new SpringNamingStrategy());
          return props;
      }
}

Note:

  1. The @Profile annotation ensures this configuration is used only for non-test profiles. This allows me to set a different datasource, e.g. H2, when running tests.
  2. @EnableJpaRepositories is used for Spring Data. Note that we are using the default transaction manager set up by Spring.
  3. Bean for the primary datasource. The @ConfigurationProperties annotation specifies the prefix of the properties used by this datasource. For the example here, the application.properties file should include properties like the ones below:
    # DB Connection
    primary.datasource.mysql.url=jdbc:log4jdbc:mysql://localhost/bpmn_service
    primary.datasource.mysql.username=root
    primary.datasource.mysql.password=root
    primary.datasource.mysql.driver-class-name=net.sf.log4jdbc.DriverSpy
  4. The persistence unit name should be set in the entity manager factory bean as shown here.

Configure Secondary Entity Manager

Configuration of the secondary entity manager is very similar to that of the primary. The main difference is that we have to define a separate transaction manager. Make sure a different prefix is used for the configuration properties of the data source, and that a different persistence unit name is used.

@Profile("!test")
@Configuration
@EnableTransactionManagement
@EnableJpaRepositories(basePackages = "au.com.myblog.dao", entityManagerFactoryRef = "secondaryMySqlEntityManager", transactionManagerRef = "secondaryTransactionManager")
public class SecondaryMysqlDBConfiguration {
      @Bean
      @ConfigurationProperties(prefix = "secondary.datasource.mysql")
       public DataSource mysqlDataSource() {
            return DataSourceBuilder.create().build();
       }
      @PersistenceContext(unitName = "secondary")
      @Bean(name = "secondaryMySqlEntityManager")
      public LocalContainerEntityManagerFactoryBean mySqlEntityManagerFactory(EntityManagerFactoryBuilder builder) {
           return  builder.dataSource(mysqlDataSource()).properties(jpaProperties()).persistenceUnit("secondary").packages("au.com.myblog.domain").build();
       }
      @Bean(name = "secondaryTransactionManager")
       public PlatformTransactionManager transactionManager(EntityManagerFactoryBuilder builder) {
             JpaTransactionManager tm = new JpaTransactionManager();
             tm.setEntityManagerFactory(mySqlEntityManagerFactory(builder).getObject());
             tm.setDataSource(mysqlDataSource());
             return tm;
       }
       private Map<String, Object> jpaProperties() {
             Map<String, Object> props = new HashMap<>();
            // naming strategy to put underscores instead of camel case
            // as per auto JPA Configuration
            props.put("hibernate.ejb.naming_strategy", new SpringNamingStrategy());
            props.put("hibernate.hbm2ddl.auto", "update");
            return props;
       }
}

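For reference, the secondary data source picks up its settings from properties with the secondary.datasource.mysql prefix in application.properties; the values below are illustrative:

    # Secondary DB Connection
    secondary.datasource.mysql.url=jdbc:mysql://localhost/secondary_db
    secondary.datasource.mysql.username=root
    secondary.datasource.mysql.password=root
    secondary.datasource.mysql.driver-class-name=com.mysql.jdbc.Driver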
Note the setup above assumes the primary and secondary data sources are not used together in a single transaction. Hence we can use two independent transaction managers.

Specify which Entity Manager to use

Finally, make sure that you specify the name of the secondary transaction manager in the Spring @Transactional annotation:

@Transactional(value = "secondaryTransactionManager")

Also, add the @PersistenceContext annotation with the unit name as defined in the configuration when injecting entity managers:

 @PersistenceContext(unitName = "secondary")
 private EntityManager entityManager;
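Putting it together, a DAO using the secondary data source might look like the following sketch (the class ReportDao and entity Report are hypothetical):

@Repository
public class ReportDao {

     @PersistenceContext(unitName = "secondary")
     private EntityManager entityManager;

     @Transactional(value = "secondaryTransactionManager")
     public void save(Report report) {
          // persisted through the secondary entity manager and data source
          entityManager.persist(report);
     }
}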

That’s it!

Validating Spring MVC Request Mapping Method parameters

This short post demonstrates how to set up and use JSR-303 validation for path variable and request parameter arguments in Spring MVC request mapping methods. I am using Spring Boot v1.2.5 with Maven 3.

I. MethodValidationPostProcessor

The only configuration needed is adding the MethodValidationPostProcessor (javadoc) bean to the Spring configuration, e.g.:

 @Bean
 public MethodValidationPostProcessor methodValidationPostProcessor() {
      return new MethodValidationPostProcessor();
 }

II. Add validation to controller request mapping methods

First, add the @Validated annotation to the controller class as follows:

@RestController
@Validated
public class HelloController {
     ...

Then add any JSR-303 validation annotation to the request mapping method arguments:

 @RequestMapping("/hi/{name}")
 public String sayHi(@Size(max = 10, min = 3, message = "name should have between 3 and 10 characters") @PathVariable("name") String name) {
      return "Hi " + name;
 }

The example code above shows how to validate a value in the request path marked by @PathVariable. You can do the same with @RequestParam, as shown below.
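For example, a request parameter could be validated like this (the /hello endpoint and the id parameter are made up for illustration):

 @RequestMapping("/hello")
 public String sayHello(@Min(value = 1, message = "id must be a positive number") @RequestParam("id") long id) {
      return "Hello " + id;
 }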

III. Validation exception handling

A ConstraintViolationException will be thrown if the size of the path variable is not within 3 to 10 characters. You may need to catch this exception and process it using a Spring MVC exception handler, for example to return the error messages in the response:

 @ExceptionHandler(value = { ConstraintViolationException.class })
 @ResponseStatus(value = HttpStatus.BAD_REQUEST)
 public String handleResourceNotFoundException(ConstraintViolationException e) {
      Set<ConstraintViolation<?>> violations = e.getConstraintViolations();
      StringBuilder strBuilder = new StringBuilder();
      for (ConstraintViolation<?> violation : violations ) {
           strBuilder.append(violation.getMessage() + "\n");
      }
      return strBuilder.toString();
 }

Custom Json Serializer and Deserializer for Joda datetime objects

This post demonstrates how to add custom JSON serializer and deserializer classes for Joda datetime objects when used with the Jackson JSON processor. I use LocalDateTime in the examples here, but the same approach applies to the other Joda date/time classes.

Default Serialization

By default, Jackson serializes the whole Joda object as is. For example, consider the following object:

 public class ExampleDto {
      private LocalDateTime asDefault = LocalDateTime.now();
      ...

Serializing it with Jackson with

import com.fasterxml.jackson.databind.ObjectMapper;
...
ObjectMapper mapper = new ObjectMapper(); 
String json = mapper.writeValueAsString(new ExampleDto());

will return something like the following in the JSON string:

{"asDefault":{"era":1,"dayOfMonth":24,"dayOfWeek":6,"dayOfYear":24,"year":2015,"yearOfEra":2015,...}

Not really useful.

LocalDateTimeSerializer / LocalDateTimeDeserializer

Jackson provides an alternative for handling Joda objects. For example:

public class ExampleDto {
       @JsonSerialize(using = LocalDateTimeSerializer.class)
       @JsonDeserialize(using = LocalDateTimeDeserializer.class)
       private LocalDateTime asArray = LocalDateTime.now();

This will return an array of integers representing the datetime in the JSON string, for example:

{"asArray":[2015,1,24,10,31,3,379]}
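The LocalDateTimeSerializer / LocalDateTimeDeserializer classes are provided by the Jackson Joda datatype module, so make sure it is on the classpath; a Maven dependency along these lines should do (the version shown is illustrative):

 <dependency>
      <groupId>com.fasterxml.jackson.datatype</groupId>
      <artifactId>jackson-datatype-joda</artifactId>
      <version>2.4.4</version>
 </dependency>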

Custom Serializer and Deserializer

It is also possible to customize the output JSON by adding custom serialization and deserialization classes. For example, to generate JSON like:

{"custom":
     {"date":"2015-01-24",
      "time":"10:31:03"}
}

The above format is useful when you have to persist the JSON in a NoSQL database such as MongoDB for datetime queries. To achieve this, first modify the @JsonSerialize and @JsonDeserialize annotations with the names of the custom classes and then implement those classes:

public class ExampleDomain {
      @JsonSerialize(using = CustomLocalDateTimeSerializer.class)
      @JsonDeserialize(using = CustomLocalDateTimeDeserializer.class)
      private LocalDateTime custom = LocalDateTime.now();

Class CustomLocalDateTimeSerializer

public class CustomLocalDateTimeSerializer extends StdScalarSerializer<LocalDateTime> {

      private final static DateTimeFormatter DATE_FORMAT = DateTimeFormat.forPattern("yyyy-MM-dd");
      private final static DateTimeFormatter TIME_FORMAT = DateTimeFormat.forPattern("HH:mm:ss");
 
      public CustomLocalDateTimeSerializer() {
            super(LocalDateTime.class);
      }

      protected CustomLocalDateTimeSerializer(Class<LocalDateTime> t) {
            super(t);
      }

      @Override
      public void serialize(LocalDateTime value, JsonGenerator jgen, SerializerProvider provider) throws IOException, JsonProcessingException {
            jgen.writeStartObject();
            jgen.writeStringField("date", DATE_FORMAT.print(value));
            jgen.writeStringField("time", TIME_FORMAT.print(value));
            jgen.writeEndObject();
      }
}

Class CustomLocalDateTimeDeserializer

public class CustomLocalDateTimeDeserializer extends StdScalarDeserializer<LocalDateTime> {
      private final static DateTimeFormatter DATETIME_FORMAT = DateTimeFormat.forPattern("yyyy-MM-dd HH:mm:ss");

      public CustomLocalDateTimeDeserializer() {
            super(LocalDateTime.class);
      }

      protected CustomLocalDateTimeDeserializer(Class<?> vc) {
            super(vc);
      }

      @Override
      public LocalDateTime deserialize(JsonParser jp, DeserializationContext ctxt) throws IOException, JsonProcessingException {
           String dateStr = null;
           String timeStr = null;
           String fieldName = null;
           
           while (jp.hasCurrentToken()) {
                JsonToken token = jp.nextToken();
                if (token == JsonToken.FIELD_NAME) {
                     fieldName = jp.getCurrentName();
                } else if (token == JsonToken.VALUE_STRING) {
                     if (StringUtils.equals(fieldName, "date")) {
                          dateStr = jp.getValueAsString();
                     } else if (StringUtils.equals(fieldName, "time")) {
                          timeStr = jp.getValueAsString();
                     } else {
                          throw new JsonParseException("Unexpected field name", jp.getTokenLocation());
                     }
                } else if (token == JsonToken.END_OBJECT) {
                     break;
                }
           }
           if (dateStr != null && timeStr != null) {
                  LocalDateTime dateTime = LocalDateTime.parse(dateStr + " " + timeStr, DATETIME_FORMAT);
                  return dateTime;
            }
         return null;
      }
}

Note how the JsonParser is used in the deserializer. Also, the datetime formats used in the serializer and deserializer must match.
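As a quick sanity check, the custom classes can be exercised with a round trip through the ObjectMapper; the sketch below assumes ExampleDomain has a no-argument constructor and the usual getter and setter for the custom field:

ObjectMapper mapper = new ObjectMapper();
// serialize using CustomLocalDateTimeSerializer
String json = mapper.writeValueAsString(new ExampleDomain());
// e.g. {"custom":{"date":"2015-01-24","time":"10:31:03"}}
// deserialize back using CustomLocalDateTimeDeserializer
ExampleDomain restored = mapper.readValue(json, ExampleDomain.class);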