Building Serverless APIs with Spring Boot, AWS Lambda and API Gateway

This post demonstrates how to expose a RESTful API implemented with Spring MVC in a Spring Boot application as a Lambda function deployed behind AWS API Gateway. We will be using the aws-serverless-java-container package, which supports the native API Gateway proxy integration model for requests and responses.

Project Setup

Create a new Spring Boot project, e.g. using the Spring Initializr, or modify an existing project to include the aws-serverless-java-container dependency:

<dependency> 
      <groupId>com.amazonaws.serverless</groupId> 
      <artifactId>aws-serverless-java-container-spring</artifactId> 
      <version>1.1</version> 
</dependency>

We can remove the Spring Boot Maven Plugin from the pom file. Instead, add the Maven Shade Plugin and exclude the embedded Tomcat from the deployed package:

<plugins> 
   <plugin> 
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <configuration> 
        <createDependencyReducedPom>false</createDependencyReducedPom> 
      </configuration> 
      <executions> 
        <execution> 
          <phase>package</phase>
          <goals> 
            <goal>shade</goal> 
          </goals> 
          <configuration> 
             <artifactSet> 
                <excludes> 
                   <exclude>org.apache.tomcat.embed:*</exclude>
                </excludes> 
             </artifactSet>
          </configuration> 
        </execution>
      </executions> 
   </plugin>
</plugins>

Serverless API

1. HelloController

Implement RESTful APIs using Spring MVC as usual. For example:

package com.madman.lambda;
...
@RestController
public class HelloController {
 
     @RequestMapping(path = "/greeting", method = RequestMethod.GET)
     public GreetingDto sayHello(@RequestParam String name) {
          String message = "Hello " + name;
          GreetingDto dto = new GreetingDto();
          dto.setMessage(message);
          return dto;
     }

     ...
}
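The GreetingDto returned above is a simple POJO. A minimal sketch, assuming only the message field implied by the controller and the test shown later:

public class GreetingDto {

     private String message;

     public String getMessage() {
          return message;
     }

     public void setMessage(String message) {
          this.message = message;
     }
}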

2. StreamLambdaHandler

To be deployed as an AWS Lambda function, the Java code needs a class implementing one of the Lambda handler interfaces, in this case RequestStreamHandler. The aws-serverless-java-container library makes this rather straightforward:

...
public class StreamLambdaHandler implements RequestStreamHandler {
    private static Logger logger = LoggerFactory.getLogger(StreamLambdaHandler.class);     

    public static final SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;
 
    static { 
       try { 
           handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(HelloLambdaApplication.class);
       } catch (ContainerInitializationException e) {
           // If we fail here, we re-throw the exception to force another cold start
           String errMsg = "Could not initialize Spring Boot application";
           logger.error(errMsg, e);
           throw new RuntimeException(errMsg, e);
       } 
    }

    @Override 
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context) throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        // just in case it wasn't closed 
        outputStream.close(); 
    }
}

The StreamLambdaHandler class implements the predefined AWS Lambda handler interface RequestStreamHandler for handling events.

Note that the handling of Lambda events is delegated to the SpringBootLambdaContainerHandler class.

3. HelloLambdaApplication

Note that the SpringBootLambdaContainerHandler.getAwsProxyHandler method is given a class implementing Spring's web application initializer interface. The main Spring Boot application class satisfies this by extending SpringBootServletInitializer:

@SpringBootApplication
@ComponentScan(basePackages = "com.madman.lambda.controller")
public class HelloLambdaApplication extends SpringBootServletInitializer {
 
     public static void main(String[] args) { 
           SpringApplication.run(HelloLambdaApplication.class, args);
     }
}

4. HelloControllerTest

The aws-serverless-java-container library also supports integration testing of the proxy API. Below is an integration test for HelloController:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { HelloLambdaApplication.class })
@WebAppConfiguration
public class HelloControllerTest {
    private MockLambdaContext lambdaContext;
    private SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;
    
    @Autowired 
    private ObjectMapper mapper;
 
    public HelloControllerTest() { 
       lambdaContext = new MockLambdaContext(); 
       this.handler = StreamLambdaHandler.handler; 
    }

    @Test
    public void testGreetingApi() throws JsonParseException, JsonMappingException, IOException {
       AwsProxyRequest request = new AwsProxyRequestBuilder("/greeting", "GET").queryString("name", "John").build(); 
       AwsProxyResponse response = handler.proxy(request, lambdaContext);
      
       assertThat(response.getStatusCode(), equalTo(200)); 
       GreetingDto responseBody = mapper.readValue(response.getBody(), GreetingDto.class);
       assertThat(responseBody.getMessage(), equalTo("Hello John"));
    }
}

Deploying to AWS

The full source code can be found on GitHub here. Run Maven to build the jar file and deploy it as a Lambda function. I use the AWS Toolkit for Eclipse to deploy the jar package to AWS. Refer to the AWS documentation for more options and information on deploying Lambda applications.

To set up AWS API Gateway as the trigger for the Lambda function:

  1. Create a New API
  2. Create Resource
    1. Configure as proxy resource
    2. Resource Name: greeting
  3. Create Method
    1. GET
    2. Integration type: Lambda Function
    3. Use Lambda Proxy Integration: true
    4. Lambda Function: <Name of Lambda function>

You should then be able to test the API in the AWS console (as in the screenshot below).

[Screenshot: testing the API in the API Gateway console]

A couple of things to note:

  1. Cold start – the Java container takes a good few seconds to start. The latency is fine once it’s warmed up.
  2. Fat jar – the Spring Boot jar in this example is around 13MB, which is still fine for Lambda (the limit is 50MB).

Spring for Apache Kafka Quick Start

In this blog, I set up a basic Spring Boot project for developing a Kafka-based messaging system using Spring for Apache Kafka. The project also includes the basic Spring configuration required for publishing and listening to messages from a Kafka broker.

Project Setup

The following tools and versions are used here:

  1. Maven 3.x
  2. Spring Kafka 1.3.2 (current release version)
  3. Kafka client 0.11.0.2
  4. Spring Boot 1.5.9

The current Spring Boot release version (1.5.9) manages Spring Kafka at version 1.1.7, so I have to override this to use 1.3.2. My Maven pom file fragment is shown below:

 <parent>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-parent</artifactId>
      <version>1.5.9.RELEASE</version>
      <relativePath /> <!-- lookup parent from repository -->
 </parent>

 <dependencies>
      <dependency>
           <groupId>org.springframework.kafka</groupId>
           <artifactId>spring-kafka</artifactId>
           <version>1.3.2.RELEASE</version>
      </dependency>
      
      <dependency>
           <groupId>org.springframework.kafka</groupId>
           <artifactId>spring-kafka-test</artifactId>
           <version>1.3.2.RELEASE</version>
           <scope>test</scope>
      </dependency>

      <dependency>
           <groupId>org.apache.kafka</groupId>
           <artifactId>kafka-clients</artifactId>
           <version>0.11.0.2</version>
      </dependency>
 </dependencies>

Producer Config

Spring Boot provides auto-configuration for connecting to Kafka, but I find it useful to set up the beans myself. Spring Kafka takes the same approach to Kafka as Spring does to other message brokers such as ActiveMQ: for publishing messages, a template, KafkaTemplate, has to be configured, just as JmsTemplate is for ActiveMQ.

The following is my Java config for a KafkaTemplate to publish messages to the Kafka broker:

@Configuration
public class KafkaProducerConfig {

     @Value("${spring.kafka.bootstrap-servers}") // (1)
     private String brokerAsString;
 
     @Bean
     public ProducerFactory<Integer, String> producerFactory() {
          return new DefaultKafkaProducerFactory<>(producerConfigs());
     }

     @Bean
     public Map<String, Object> producerConfigs() {
          Map<String, Object> props = new HashMap<>();
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
          props.put(ProducerConfig.RETRIES_CONFIG, 0);
          props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
          props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
          props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
          return props;
      }

     @Bean
     public KafkaTemplate<Integer, String> kafkaTemplate() {
          return new KafkaTemplate<Integer, String>(producerFactory());
     }
}

Note:

  1. The broker address is set using the property spring.kafka.bootstrap-servers defined in the application.properties (or yml) file. For example:
// application.properties
spring.kafka.bootstrap-servers=localhost:9092
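For illustration, here is a minimal sketch of a service publishing messages with the KafkaTemplate configured above (the GreetingsSender class name is an assumption; the greetings topic matches the listener shown later):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingsSender {

     @Autowired
     private KafkaTemplate<Integer, String> kafkaTemplate;

     // Publish a message with the given key to the "greetings" topic
     public void send(Integer key, String message) {
          kafkaTemplate.send("greetings", key, message);
     }
}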

Consumer Config

Consuming messages from Kafka using Spring Kafka is similar to consuming messages from ActiveMQ using Spring's JMS support: we need to define a listener container factory and a message listener. Below is my Java config for the listener container factory.

@Configuration
public class KafkaConsumerConfig {
 
     @Value("${spring.kafka.bootstrap-servers}")
     private String brokerAsString;

     @Value("${spring.kafka.consumer.group-id}")
     private String groupId;
 
     @Value("${spring.kafka.consumer.auto-offset-reset}")
     private String autoOffsetReset;
 
     @Bean
     ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
          ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
          factory.setConsumerFactory(consumerFactory());
          return factory;
     }

     @Bean
     public ConsumerFactory<Integer, String> consumerFactory() {
         return new DefaultKafkaConsumerFactory<>(consumerConfigs());
     }

     @Bean
     public Map<String, Object> consumerConfigs() {
         Map<String, Object> props = new HashMap<>();
         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
         props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
         props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
         props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
         props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
         props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
         return props;
     }
}

Now we can listen to a Kafka topic by using the @KafkaListener annotation. For example:

@Service
public class GreetingsTopicListener {

 private Logger logger = LoggerFactory.getLogger(getClass());
 
 @KafkaListener(topics = "greetings")
 public void listen(ConsumerRecord<?,?> cr) throws Exception {
      logger.info(cr.toString());
 }
}

@KafkaListener will use the default listener container factory defined in the KafkaConsumerConfig class above to create the message listener. It is also possible to override this by setting the containerFactory attribute in the annotation, as sketched below; see the javadoc for more details.
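As an illustration only, a listener naming the container factory bean explicitly might look like this (the farewells topic and class name are made up; the bean name matches the @Bean method in KafkaConsumerConfig):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class FarewellsTopicListener {

     // Explicitly select the listener container factory bean by name
     @KafkaListener(topics = "farewells", containerFactory = "kafkaListenerContainerFactory")
     public void listen(ConsumerRecord<?, ?> cr) {
          // process the record
     }
}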

Creating Topics

It is also possible to add topics to the broker automatically by defining @Beans that use the AdminClient class from the new 0.11.0.x client library, as described in the Spring Kafka reference documentation:

@Configuration
public class KafkaTopicConfig {
 
 @Value("${spring.kafka.bootstrap-servers}")
 private String brokerAsString;
 
 @Bean
 public KafkaAdmin admin() {
   Map<String, Object> configs = new HashMap<>();
   configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
   return new KafkaAdmin(configs);
 }

 @Bean
 public NewTopic topic1() {
   return new NewTopic("foo", 10, (short) 2);
 }

 @Bean
 public NewTopic topic2() {
   return new NewTopic("bar", 10, (short) 2);
 }
}

That’s about it. The code included in this blog should be sufficient for setting up a Spring Boot project for a messaging system using Spring Kafka.

Integrate Docker with Maven for Spring Boot projects

This blog will demonstrate how to set up a number of Maven plugins to integrate Docker into a Spring Boot Maven project. The objective is to rebuild the Docker image for the project seamlessly whenever Maven is run to build and release a new jar file.

The source code for this project can be found here.

POM File

The build lifecycle in the project's pom.xml file is updated to include three plugins, as shown below:

<build>
 <plugins>
    <plugin>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
       <artifactId>maven-resources-plugin</artifactId>
       <executions>
          <execution>
             <id>copy-resources</id>
             <phase>process-resources</phase>
             <goals>
                 <goal>copy-resources</goal>
             </goals>
             <configuration>
               <outputDirectory>${basedir}/target</outputDirectory>
               <resources>
                  <resource>
                     <directory>src/main/resources/docker</directory>
                     <includes>
                        <include>Dockerfile</include>
                     </includes>
                  </resource>
               </resources>
             </configuration>
           </execution>
        </executions>
     </plugin>
     <plugin>
        <groupId>com.google.code.maven-replacer-plugin</groupId>
        <artifactId>replacer</artifactId>
        <version>1.5.3</version>
        <executions>
            <execution>
               <phase>prepare-package</phase>
               <goals>
                   <goal>replace</goal>
               </goals>
            </execution>
        </executions>
        <configuration>
             <file>${basedir}/target/Dockerfile</file>
             <replacements>
                 <replacement>
                     <token>IMAGE_VERSION</token>
                     <value>${project.version}</value>
                 </replacement>
             </replacements>
         </configuration>
     </plugin>
      <plugin>
          <groupId>com.spotify</groupId>
          <artifactId>dockerfile-maven-plugin</artifactId>
          <version>${version.dockerfile-maven}</version>
          <executions>
             <execution>
                <id>default</id>
                <goals>
                   <goal>build</goal>
                   <goal>push</goal>
                </goals>
             </execution>
          </executions>
          <configuration>
             <contextDirectory>${project.build.directory}</contextDirectory>
             <repository>image name here</repository>
             <tag>${project.version}</tag>
          </configuration>
      </plugin>
  </plugins>
</build>

The pom file assumes the Dockerfile can be found in the source folder src/main/resources/docker. You can define your own Dockerfile as needed for your project. For demo purposes, I am using, with minor modification, the sample Dockerfile found in this blog:

FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD docker-maven-IMAGE_VERSION.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Note the token IMAGE_VERSION; this will be replaced with the version of the jar file being built.

The first step in the build is to copy the Dockerfile above to the /target (i.e. build output) folder using the resources plugin. This is needed for two reasons: (1) we need the filename added to the image to match the version being built, and (2) Docker does not allow ADD to reference a source file outside the build context of the Dockerfile, so we have to put it in the same directory as the jar file.

The second step is to set, in the Dockerfile just copied, the correct version of the jar file to be included in the Docker image. The Maven Replacer plugin is used to replace the token IMAGE_VERSION in the Dockerfile with the Maven variable project.version.

Finally, we run the Spotify Dockerfile Maven plugin to build/push the Docker image. The plugin allows you to set which repository to use. Note we set the tag to the Maven variable project.version, as in step 2, to make sure the image tag matches that of the jar file.

Running Maven

Now whenever the project is built or deployed in Maven using the standard mvn install or mvn deploy, the corresponding Docker image will also be built or pushed to the repository. This also works with mvn release:prepare and mvn release:perform for releasing a tagged version of the jar file.
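For example, with the standard Maven invocations (what each one does to the image follows from the plugin goals bound above):

mvn clean install    # builds the jar and the Docker image
mvn clean deploy     # additionally pushes the image to the configured repository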

Note you would need to set up the certificates required to access the Docker daemon. For example, include the following environment variables:

DOCKER_HOST=<host ip address>
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=C:\Users\<username>\.docker\machine\certs

Consult Docker documentation for more details on secure access to the Docker daemon.

Below is an excerpt of what you would see in the console when running mvn install:

[INFO] ------------------------------------------------------------------------
[INFO] Building docker-maven 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
...
[INFO] --- maven-resources-plugin:2.6:copy-resources (copy-resources) @ docker-maven ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ docker-maven ---
...
[INFO] --- replacer:1.5.3:replace (default) @ docker-maven ---
[INFO] Replacement run on 1 file.
[INFO] 
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ docker-maven ---
[INFO] Building jar: D:\src\blog_docker_maven\target\docker-maven-0.0.1-SNAPSHOT.jar
...
[INFO] Image will be built as <image name here>
[INFO] 
[INFO] Step 1/7 : FROM frolvlad/alpine-oraclejdk8:slim
[INFO] Pulling from frolvlad/alpine-oraclejdk8
...

Monitoring Spring Boot Applications with Prometheus – Part 2

In my previous post, I described how to use Prometheus and its JVM client library in a Spring Boot application to gather common JVM metrics. In this blog, I will demonstrate how to write your own application-specific metrics using the client library. Prometheus client libraries support 4 core metric types: Counter, Gauge, Histogram and Summary. The Prometheus documentation has a brief but clear explanation of each metric type.

I will create two metrics to gather statistics about incoming requests to a Spring web application: a counter to count how many requests the web application handles, and a summary to measure the processing time of the incoming requests.

Application Set Up

To demonstrate the metrics we are going to implement, I have the following controller implementing two endpoints, /endpointA and /endpointB. They do nothing except waste a random amount of time between 0 and 100 ms.

@RestController
public class HomeController {
     private Logger logger = LoggerFactory.getLogger(getClass());
 
     @RequestMapping("/endpointA")
     public void handlerA() throws InterruptedException {
          logger.info("/endpointA");
          Thread.sleep(RandomUtils.nextLong(0, 100));
     }
 
     @RequestMapping("/endpointB")
     public void handlerB() throws InterruptedException {
          logger.info("/endpointB");
          Thread.sleep(RandomUtils.nextLong(0, 100));
     }
}

Counter Metric

A counter is a numeric metric whose value never goes down. Here we add a request handler interceptor to keep track of the cumulative number of requests the web application receives:

import io.prometheus.client.Counter;
..
public class RequestCounterInterceptor extends HandlerInterceptorAdapter {

     // @formatter:off
     // Note (1)
     private static final Counter requestTotal = Counter.build()
          .name("http_requests_total")
          .labelNames("method", "handler", "status")
          .help("Http Request Total").register();
     // @formatter:on

     @Override
     public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception e) throws Exception {
          // Update counters
          String handlerLabel = handler.toString();
          // get short form of handler method name
          if (handler instanceof HandlerMethod) {
               Method method = ((HandlerMethod) handler).getMethod();
               handlerLabel = method.getDeclaringClass().getSimpleName() + "." + method.getName();
          }
          // Note (2)
          requestTotal.labels(request.getMethod(), handlerLabel, Integer.toString(response.getStatus())).inc();
     }
}

Note:

  1. We implement a counter using the io.prometheus.client.Counter class to keep track of the number of incoming requests handled by this handler. The counter is named http_requests_total and has a number of labels (method, handler and status). A label is an attribute of a metric which can be used in queries to filter and aggregate metrics.
  2. The counter is incremented using the Counter’s inc() method. The values of the labels method, handler and status are populated with the request HTTP method (GET/POST), the Spring MVC controller and method, and the response HTTP status respectively.

Summary Metric

A summary is a complex metric which tracks a number of observations as well as their count. See here for a comprehensive explanation of summary and histogram metrics. Similar to the previous section, we implement the metric within a request handler interceptor class:

import io.prometheus.client.Summary;
...
public class RequestTimingInterceptor extends HandlerInterceptorAdapter {

      private static final String REQ_PARAM_TIMING = "timing";

      // @formatter:off
      // Note (1)
      private static final Summary responseTimeInMs = Summary
           .build()
           .name("http_response_time_milliseconds")
           .labelNames("method", "handler", "status")
           .help("Request completed time in milliseconds")
           .register();
      // @formatter:on

      @Override
      public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
           // Note (2)
           request.setAttribute(REQ_PARAM_TIMING, System.currentTimeMillis());
           return true;
      }

       @Override
       public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
           Long timingAttr = (Long) request.getAttribute(REQ_PARAM_TIMING);
           long completedTime = System.currentTimeMillis() - timingAttr;
           String handlerLabel = handler.toString();
           // get short form of handler method name
           if (handler instanceof HandlerMethod) {
                Method method = ((HandlerMethod) handler).getMethod();
                handlerLabel = method.getDeclaringClass().getSimpleName() + "." + method.getName();
           }
          // Note (3)
          responseTimeInMs.labels(request.getMethod(), handlerLabel, Integer.toString(response.getStatus())).observe(completedTime);
      }
}

Note:

  1. We implement the metric with the io.prometheus.client.Summary class. The same set of labels as used in the request counter is used here.
  2. The response time is measured as the time elapsed between the calls to the preHandle and afterCompletion methods. This is for illustration only, so I can demonstrate the use of the Prometheus client library.
  3. The summary metric is updated by calling the observe() method with the response time value.
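Before these metrics can be collected, the two interceptors above need to be registered with Spring MVC. A minimal sketch, assuming a WebMvcConfigurerAdapter-based configuration (the class name is illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class WebMvcConfig extends WebMvcConfigurerAdapter {

     @Override
     public void addInterceptors(InterceptorRegistry registry) {
          // Register the Prometheus request interceptors for all handlers
          registry.addInterceptor(new RequestCounterInterceptor());
          registry.addInterceptor(new RequestTimingInterceptor());
     }
}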

Collecting Metrics

Now that we have implemented the request handler interceptors and registered them for the test endpoints, we can start up the Spring Boot web application. Hit the two endpoints in a browser; the Prometheus endpoint, e.g. http://localhost:8080/prometheus, should then return something similar to the following:

...
# HELP http_response_time_milliseconds Request completed time in milliseconds
# TYPE http_response_time_milliseconds summary
http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerA",status="200",} 3.0
http_response_time_milliseconds_sum{method="GET",handler="HomeController.handlerA",status="200",} 169.0
http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerB",status="200",} 1.0
http_response_time_milliseconds_sum{method="GET",handler="HomeController.handlerB",status="200",} 59.0
# HELP http_requests_total Http Request Total
# TYPE http_requests_total counter
http_requests_total{method="GET",handler="HomeController.handlerA",status="200",} 3.0
http_requests_total{method="GET",handler="HomeController.handlerB",status="200",} 1.0

Note the summary metric http_response_time_milliseconds actually collects two time series: _count and _sum. In practice, we therefore don’t require a separate counter metric; it is included more to demonstrate how to implement a simple counter with the Prometheus client library.

Also, data for each metric are collected separately for each distinct set of label values. In this example, we get separate series for each individual endpoint because the value of the handler label differs. Since we also include the response status as a label, we will potentially get further series with non-200 status values, for example:

http_response_time_milliseconds_count{method="GET",handler="HomeController.handlerA",status="500",} 3.0

Queries in Prometheus

One of the useful features of Prometheus is its powerful query language for manipulating the time series data collected.

For example, the cumulative value of total number of http requests of the counter we implement is not of much use. Typically we would want to get the number of requests a server received per second to get an idea of the system load at various time of the day. This can be achieved by query like the following:

rate(http_requests_total{job="blog", handler="HomeController.handlerA"}[5m])

The above query uses the built-in rate() function to return the per-second rate of HTTP requests measured over the last 5 minutes ([5m]) for the endpoint /endpointA, as specified by the handler filter handler="HomeController.handlerA". The query can be modified to gather only requests that are not successful (not 2xx) by including the status label in the query:

rate(http_requests_total{job="blog", handler="HomeController.handlerA", status!~"^2..$"}[5m])

Prometheus adds two labels, job and instance, for each target as defined in the prometheus.yml configuration file. It is recommended to always include the job name in the query.

Similarly, we can get the average response time over the last 5 minutes with the following query:

rate(http_response_time_milliseconds_sum{job="blog"}[5m])/rate(http_response_time_milliseconds_count{job="blog"}[5m])

For more details on the operations and functions Prometheus supports, see its documentation here.

This blog post introduces, with examples, how to implement application-specific metrics using the Prometheus JVM client library, as well as how to use the query functions provided by Prometheus to filter and query the collected data. The ability for developers to implement their own metrics, together with Prometheus’s powerful query language, is particularly useful when one needs to implement, monitor and analyse specific application functions beyond the typical JVM and application metrics.

Monitoring Spring Boot Applications with Prometheus – Part 1

This blog post will demonstrate how to use Prometheus to monitor a Spring Boot web application. Prometheus is an open source tool for monitoring systems by collecting metrics from target systems as time series data. It supports multiple approaches for instrumenting application code. I am going to show how to do this using the Prometheus JVM client library.

Instrumenting with Prometheus JVM client

POM setup

I set up a Spring Boot project in Maven and include the following dependencies for the Prometheus JVM client (version 0.0.16):

 <!-- Hotspot JVM metrics -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient_hotspot</artifactId>
      <version>${prometheus.version}</version>
 </dependency>
 <!-- Exposition servlet -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient_servlet</artifactId>
      <version>${prometheus.version}</version>
 </dependency>
 <!-- The client -->
 <dependency>
      <groupId>io.prometheus</groupId>
      <artifactId>simpleclient</artifactId>
      <version>${prometheus.version}</version>
 </dependency>

Configure and implement Metric endpoint

The main way for Prometheus to collect metrics is by scraping an endpoint exposed by the target application at regular intervals. To do that, include a Java configuration class as follows:

@Configuration
@ConditionalOnClass(CollectorRegistry.class)
public class PrometheusConfiguration {

     @Bean
     @ConditionalOnMissingBean
     CollectorRegistry metricRegistry() {
         return CollectorRegistry.defaultRegistry;
     }

     @Bean
     ServletRegistrationBean registerPrometheusExporterServlet(CollectorRegistry metricRegistry) {
           return new ServletRegistrationBean(new MetricsServlet(metricRegistry), "/prometheus");
     }

...
}

The above code adds the endpoint (/prometheus) to the Spring Boot application. Now we are ready to add some metrics to it. The Prometheus JVM client includes a number of standard exporters to collect common JVM metrics such as memory and CPU usage. Let’s add them to our new prometheus endpoint.

First, we create an exporter register class:

/**
 * Metric exporter register bean to register a list of exporters to the default
 * registry
 */
public class ExporterRegister {

     private List<Collector> collectors; 

     public ExporterRegister(List<Collector> collectors) {
          for (Collector collector : collectors) {
              collector.register();
          }
          this.collectors = collectors;
     }

     public List<Collector> getCollectors() {
          return collectors;
     }

}

The above class is just a utility to register a collection of metric collectors with the registry. Now add the standard exporters from the Prometheus JVM client:

import io.prometheus.client.hotspot.MemoryPoolsExports;
import io.prometheus.client.hotspot.StandardExports;
...  
     @Bean
     ExporterRegister exporterRegister() {
           List<Collector> collectors = new ArrayList<>();
           collectors.add(new StandardExports());
           collectors.add(new MemoryPoolsExports());
           ExporterRegister register = new ExporterRegister(collectors);
           return register;
      }

We just added two exporters: (1) StandardExports provides CPU usage metrics and (2) MemoryPoolsExports adds JVM memory usage metrics. To see what metrics are now available, go to the following URL in the browser:

http://localhost:8080/prometheus

The browser should display something like the output below (truncated as it is too long to list):

# HELP jvm_up_time_seconds System uptime in seconds.
# TYPE jvm_up_time_seconds gauge
jvm_up_time_seconds 15.0
# HELP jvm_cpu_load_percentage JVM CPU Usage %.
# TYPE jvm_cpu_load_percentage gauge
jvm_cpu_load_percentage 37.18078068931383
# HELP os_cpu_load_percentage System CPU Usage %.
# TYPE os_cpu_load_percentage gauge
...

Install and Setup Prometheus

Now that we have implemented the metric endpoint for the Spring Boot application, we are ready to install and configure Prometheus. Follow the instructions here to install Prometheus and start up the server. You should then be able to access the server in your browser, e.g. http://localhost:9090/targets

[Screenshot: Prometheus targets page]

By default, Prometheus is configured to monitor itself, which is handy. Now let’s update the configuration to scrape our Spring Boot app. Open the file prometheus.yml in the Prometheus folder and add the following lines under the scrape_configs section:

  - job_name: 'blog'
    scrape_interval: 5s
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['localhost:8080']

Restart Prometheus and refresh your browser to show the following:

[Screenshot: Prometheus targets page showing the new target]

Prometheus provides a rather basic graphing function. I will show how to integrate Prometheus with other graphing software in a later post. For now, let’s try to display the memory usage of the Spring Boot application. Go to the Graph tab, select the metric jvm_memory_bytes_used from the dropdown and click the Execute button. This should end up like the following in the browser. Note the metric has two different sets of time series data, heap and nonheap usage.

[Screenshot: graph of jvm_memory_bytes_used in Prometheus]

Summary and What Next

In this blog, I describe how to add monitoring by Prometheus via its JVM client to a Spring Boot application, generate some JVM metrics using the provided exporter classes, and configure Prometheus to scrape the data. I end by demonstrating how to use the metrics with the graphing features in Prometheus.

In a future blog, I will show how to implement custom metrics in the Spring Boot application using the Prometheus JVM client, as well as how to use its expression language to query time series data and return metrics relevant for monitoring purposes. I will also demonstrate how to create dashboards in Grafana using data from Prometheus.

 

Setup Spring Security with Active Directory LDAP in Spring Boot Web Application

This post illustrates how to set up Spring Security with Active Directory LDAP in a Spring Boot configuration for a Spring MVC web application. I will also show what needs to be configured for the embedded Tomcat to accept HTTPS.

Spring Security with LDAP

To configure Spring Security in Spring Boot, add the following configuration class to your project. Note the use of the @EnableWebMvcSecurity annotation. The configuration class extends the WebSecurityConfigurerAdapter class in Spring Security. More information can be found in the Spring Security Reference here.

@Configuration
@EnableWebMvcSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {

     @Value("${ldap.domain}")
     private String DOMAIN;

     @Value("${ldap.url}")
     private String URL;

     @Value("${http.port}")
     private int httpPort;

     @Value("${https.port}")
     private int httpsPort;

     @Override
     protected void configure(HttpSecurity http) throws Exception {
          /*
           * Set up your spring security config here. For example...
          */
          http.authorizeRequests().anyRequest().authenticated().and().formLogin().loginUrl("/login").permitAll();
          /*
           * Use HTTPs for ALL requests
          */
          http.requiresChannel().anyRequest().requiresSecure();
          http.portMapper().http(httpPort).mapsTo(httpsPort);
     }

     @Override
     protected void configure(AuthenticationManagerBuilder authManagerBuilder) throws Exception {
          authManagerBuilder.authenticationProvider(activeDirectoryLdapAuthenticationProvider()).userDetailsService(userDetailsService());
     }

     @Bean
     public AuthenticationManager authenticationManager() {
          return new ProviderManager(Arrays.asList(activeDirectoryLdapAuthenticationProvider()));
     }
     @Bean
     public AuthenticationProvider activeDirectoryLdapAuthenticationProvider() {
          ActiveDirectoryLdapAuthenticationProvider provider = new ActiveDirectoryLdapAuthenticationProvider(DOMAIN, URL);
          provider.setConvertSubErrorCodesToExceptions(true);
          provider.setUseAuthenticationRequestCredentials(true);
          return provider;
     }
}
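The values injected with @Value above come from the application properties. For example (all values below are placeholders, not taken from the original project):

### application.properties
ldap.domain=example.com
ldap.url=ldap://adserver.example.com:389
http.port=8080
https.port=8443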

Add HTTPS connector for embedded Tomcat in Spring Boot

Now that Spring Security is set up, you need to update the web server to accept HTTPS requests. To do that with the embedded Tomcat server in Spring Boot, add the following EmbeddedServletContainerCustomizer bean to the application configuration as shown below. Note I am using anonymous inner classes here, instead of the lambda expressions seen in other examples, for Java 7 compatibility. You will need a keystore file for this to work.

@Bean
EmbeddedServletContainerCustomizer containerCustomizer (

     @Value("${https.port}") final int port, 
     @Value("${keystore.file}") Resource keystoreFile,
     @Value("${keystore.alias}") final String alias, 
     @Value("${keystore.password}") final String keystorePass,
     @Value("${keystore.type}") final String keystoreType) throws Exception {
          final String absoluteKeystoreFile = keystoreFile.getFile().getAbsolutePath();
          return new EmbeddedServletContainerCustomizer() {
               public void customize(ConfigurableEmbeddedServletContainer container) {
                    TomcatEmbeddedServletContainerFactory tomcat = (TomcatEmbeddedServletContainerFactory) container;
                    tomcat.addConnectorCustomizers(new TomcatConnectorCustomizer() {
                         public void customize(Connector connector) {
                              connector.setPort(port);
                              connector.setSecure(true);
                              connector.setScheme("https");
                              Http11NioProtocol proto = (Http11NioProtocol) connector.getProtocolHandler();
                              proto.setSSLEnabled(true);
                              proto.setKeystoreFile(absoluteKeystoreFile);
                              proto.setKeyAlias(alias);
                              proto.setKeystorePass(keystorePass);
                              proto.setKeystoreType(keystoreType);
                        }
               });
           }
     };
 }
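The keystore referenced by the keystore.file property can be generated with the JDK keytool. For example (the alias, password and filename below are illustrative), the following creates a self-signed key pair; the keystore.alias, keystore.password and keystore.type properties should then match whatever you choose here:

keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 -storetype JKS -keystore keystore.jks -validity 3650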

 

Getting Start with Spring Boot Configurations

Spring Boot allows development of Spring applications with minimum configuration. This is particularly useful for developing microservices. This blog post will demonstrate a few things that may help in understanding how Spring Boot does its job of auto-configuring a Spring application.

Auto-Configuration

So what has been configured?

The main feature of Spring Boot is its ability to automatically configure the Spring application based on the jar files it finds on the classpath. So the first thing you may want to look at is what has been configured for you. This can be done by running the application with the debug flag, either by adding "--debug" to the command line or the JVM argument "-Ddebug". You will then see the "Auto Configuration Report" displayed in your console, like below:

=========================
AUTO-CONFIGURATION REPORT
=========================

Positive matches:
—————–

AopAutoConfiguration
– @ConditionalOnClass classes found: org.springframework.context.annotation.EnableAspectJAutoProxy,org.aspectj.lang.annotation.Aspect,org.aspectj.lang.reflect.Advice (OnClassCondition)
– SpEL expression on org.springframework.boot.autoconfigure.aop.AopAutoConfiguration: ${spring.aop.auto:true} (OnExpressionCondition)

AopAutoConfiguration.JdkDynamicAutoProxyConfiguration
– SpEL expression on org.springframework.boot.autoconfigure.aop.AopAutoConfiguration$JdkDynamicAutoProxyConfiguration: !${spring.aop.proxyTargetClass:false} (OnExpressionCondition)

// rest omitted…

Negative matches:
—————–

ActiveMQAutoConfiguration
– required @ConditionalOnClass classes not found: javax.jms.ConnectionFactory,org.apache.activemq.ActiveMQConnectionFactory (OnClassCondition)

// rest omitted…

The report should give you an indication of what has been configured for you. It depends mainly on what jar files you have included in your project dependencies.

Excluding an auto configuration

Spring Boot is designed so that you can gradually replace the auto-configuration as needed. To exclude an auto-configuration, use the exclude attribute of the @EnableAutoConfiguration annotation as below:

@Configuration
@ComponentScan
@EnableAutoConfiguration(exclude = ProcessEngineAutoConfiguration.class)
@ImportResource("classpath:/activiti.xml")
public class Application {

     public static void main(String[] args) {
          SpringApplication.run(Application.class, args);
     }
}

I am using the Activiti Spring Boot module here. Note that instead of relying on its auto-configuration, I have included the activiti.xml configuration file via the @ImportResource annotation to manually configure the process engine used by Activiti.

Configuration properties

Application properties, e.g. the JDBC connection string, are set in the application.properties file on the classpath. Profile-specific properties should be included in a separate application-<profile>.properties file located in the same directory as the application.properties file.
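For example, a hypothetical application-dev.properties file would be picked up when the dev profile is active, which can be set in application.properties (or on the command line with --spring.profiles.active=dev):

### application.properties
spring.profiles.active=dev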

A list of commonly used property keys can be found in the Reference Guide (here). For example, to change the embedded web server port to 8181, add the following line to the application.properties file:

### application.properties

server.port=8181

Logging

You may configure Spring Boot to use the logging framework of your choice, but first it may be useful to configure the logger properties and understand what has been configured for you. To do this, add a property of the format logging.level.<package name>=<level> to your application.properties file. For example, to display debug messages for Hibernate, add the following line:

logging.level.org.hibernate=DEBUG

That’s it. The above has helped me get started with Spring Boot and understand how auto-configuration works. The Reference Guide provides comprehensive documentation of the framework and various how-tos. There are also many tutorials and blog articles around for reference.