RouteReuseStrategy for Route Caching in Angular

This post explains how to implement RouteReuseStrategy to support custom control of route snapshot caching in Angular. A typical use case is a list page where a user searches for a list of items, then selects and navigates to a particular item’s detail page. When the user clicks the back button in the browser, the Angular app should return to the list page displaying the same items as before, using the previous search criteria.

Other advantages of caching routes for rendering are faster page loads and reduced network traffic.

To achieve the above in Angular, we need to implement RouteReuseStrategy to tell Angular not to destroy a component but to save it for re-rendering. There are a few blog posts online with example implementations. This post focuses more on describing the mechanics of the interface and its methods.

RouteReuseStrategy

Below is a skeleton implementation of a custom RouteReuseStrategy:

import { Injectable } from '@angular/core';
import { RouteReuseStrategy, ActivatedRouteSnapshot, DetachedRouteHandle } from '@angular/router';

@Injectable()
export class AppRouteReuseStrategyService implements RouteReuseStrategy {

    handles: {[key: string]: DetachedRouteHandle} = {};

    constructor() { }

    shouldDetach(route: ActivatedRouteSnapshot): boolean {
        // To Be Implemented
    }

    store(route: ActivatedRouteSnapshot, handle: DetachedRouteHandle): void {
        // To Be Implemented
    }

    shouldAttach(route: ActivatedRouteSnapshot): boolean {
        // To Be Implemented
    }

    retrieve(route: ActivatedRouteSnapshot): DetachedRouteHandle {
        // To Be Implemented
    }

    shouldReuseRoute(future: ActivatedRouteSnapshot, curr: ActivatedRouteSnapshot): boolean {
        // To Be Implemented
    }
}
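Before walking through the methods, note that Angular has to be told to use the custom strategy by registering it as a provider for the RouteReuseStrategy token, typically in the root module. A minimal sketch (the service import path is an assumption):

import { NgModule } from '@angular/core';
import { RouteReuseStrategy } from '@angular/router';
// import path below is an assumption for this example
import { AppRouteReuseStrategyService } from './app-route-reuse-strategy.service';

@NgModule({
    providers: [
        // tell the router to use the custom strategy instead of the default one
        { provide: RouteReuseStrategy, useClass: AppRouteReuseStrategyService }
    ]
})
export class AppModule { }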

shouldReuseRoute()

This is the first method to consider. If it returns true, none of the other methods will be called, for example when we are already reusing the current route snapshot. Note that, perhaps counter-intuitively, the future argument refers to the route you are coming from. For example, if the app navigates from the item list page to the item detail page, curr would refer to the route for the item detail page and future to the route for the item list page.
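As a starting point, a minimal sketch that mirrors Angular’s default behaviour, reusing the route when the route configuration is unchanged:

shouldReuseRoute(future: ActivatedRouteSnapshot, curr: ActivatedRouteSnapshot): boolean {
    // mirror the default strategy: reuse when the route configuration is unchanged
    return future.routeConfig === curr.routeConfig;
}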

shouldDetach() and store()

If the method shouldReuseRoute returns false, the method shouldDetach will be called to determine whether the current route snapshot should be detached and stored. If it returns true, the store method will be called. A handle to the detached route snapshot (of type DetachedRouteHandle) is provided as an argument so the method can store it for later use.

Note that if a null handle is provided to the method, it should erase the stored value for the given route. See the API documentation for details.

Note that once a route snapshot is detached, it is the developer’s responsibility to manage its lifecycle and perform any clean-up needed for proper memory management.
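A minimal sketch of the two methods, using the handles map from the skeleton above. The route path is used as the cache key and the 'reuse' route-data flag is an assumption for this example; real apps may need to key on route parameters as well:

shouldDetach(route: ActivatedRouteSnapshot): boolean {
    // only detach routes explicitly marked as cacheable via route data
    // (the 'reuse' data flag is an assumption for this example)
    return route.data['reuse'] === true;
}

store(route: ActivatedRouteSnapshot, handle: DetachedRouteHandle): void {
    const key = route.routeConfig.path;
    if (handle === null) {
        // per the API contract, a null handle means erase the stored snapshot
        delete this.handles[key];
    } else {
        this.handles[key] = handle;
    }
}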

shouldAttach() and retrieve()

Similar to the above, if the method shouldReuseRoute returns false, the method shouldAttach will be called to determine whether a cached route should be used. If it returns true, the method retrieve will be called to retrieve the handle to the detached route snapshot previously stored.

Note the shouldAttach method is also a good place to clean up any stored snapshots, for example when a user has logged out or the snapshot has become stale, in which case we should not render the stored snapshot. The method should then return false and the stored handle should be removed from storage.
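A matching sketch under the same keying assumption as above:

shouldAttach(route: ActivatedRouteSnapshot): boolean {
    // attach only when a snapshot was stored for this route
    // (also a good place to evict stale or invalidated snapshots)
    return !!route.routeConfig && !!this.handles[route.routeConfig.path];
}

retrieve(route: ActivatedRouteSnapshot): DetachedRouteHandle {
    // return the stored handle, or null if there is nothing to re-attach
    return this.handles[route.routeConfig.path] || null;
}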

That’s it. Hopefully the above gives some clarity on what the RouteReuseStrategy class does.


Spring for Apache Kafka Quick Start

In this blog, I set up a basic Spring Boot project for developing a Kafka-based messaging system using Spring for Apache Kafka. The project also includes the basic Spring configuration required for publishing and listening to messages from a Kafka broker.

Project Setup

The following tools and versions are used here:

  1. Maven 3.x
  2. Spring Kafka 1.3.2 (current release version)
  3. Kafka client 0.11.0.2
  4. Spring Boot 1.5.9

The current Spring Boot release version (1.5.9) has Spring Kafka version 1.1.7 as the managed version. I have to override this to use 1.3.2. My Maven POM file fragment is below:

 <parent>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-parent</artifactId>
      <version>1.5.9.RELEASE</version>
      <relativePath /> <!-- lookup parent from repository -->
 </parent>

 <dependencies>
      <dependency>
           <groupId>org.springframework.kafka</groupId>
           <artifactId>spring-kafka</artifactId>
           <version>1.3.2.RELEASE</version>
      </dependency>
      
      <dependency>
           <groupId>org.springframework.kafka</groupId>
           <artifactId>spring-kafka-test</artifactId>
           <version>1.3.2.RELEASE</version>
           <scope>test</scope>
      </dependency>

      <dependency>
           <groupId>org.apache.kafka</groupId>
           <artifactId>kafka-clients</artifactId>
           <version>0.11.0.2</version>
      </dependency>
 </dependencies>

Producer Config

Spring Boot provides auto-configuration for connecting to Kafka, but I find it useful to set up the beans myself. Spring Kafka adopts the same approach to Kafka as Spring does for other message brokers such as ActiveMQ. For publishing messages, a template, KafkaTemplate, has to be configured, just as JmsTemplate is for ActiveMQ.

The following is my Java config for a KafkaTemplate to publish messages to the Kafka broker:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

     @Value("${spring.kafka.bootstrap-servers}") // (1)
     private String brokerAsString;
 
     @Bean
     public ProducerFactory<Integer, String> producerFactory() {
          return new DefaultKafkaProducerFactory<>(producerConfigs());
     }

     @Bean
     public Map<String, Object> producerConfigs() {
          Map<String, Object> props = new HashMap<>();
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
          props.put(ProducerConfig.RETRIES_CONFIG, 0);
          props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
          props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
          props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
          return props;
      }

     @Bean
     public KafkaTemplate<Integer, String> kafkaTemplate() {
          return new KafkaTemplate<Integer, String>(producerFactory());
     }
}

Note:

  1. The broker address is set using the property spring.kafka.bootstrap-servers defined in the application.properties (or .yml) file. Note the value is a list of host:port pairs, with no protocol scheme. For example:
// application.properties
spring.kafka.bootstrap-servers=localhost:9092
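With the template configured, publishing a message is a one-liner. A minimal sketch of a sender service (GreetingsSender is a hypothetical name; the greetings topic matches the listener example later in this post):

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class GreetingsSender {

     private final KafkaTemplate<Integer, String> kafkaTemplate;

     public GreetingsSender(KafkaTemplate<Integer, String> kafkaTemplate) {
          this.kafkaTemplate = kafkaTemplate;
     }

     public void send(String message) {
          // send(topic, data) publishes asynchronously and returns a ListenableFuture
          kafkaTemplate.send("greetings", message);
     }
}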

Consumer Config

Consuming messages from Kafka using Spring Kafka is similar to consuming messages from ActiveMQ using Spring’s JMS support. We need to define a listener container factory and a message listener. Below is my Java config for the message listener container factory.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {
 
     @Value("${spring.kafka.bootstrap-servers}")
     private String brokerAsString;

     @Value("${spring.kafka.consumer.group-id}")
     private String groupId;
 
     @Value("${spring.kafka.consumer.auto-offset-reset}")
     private String autoOffsetReset;
 
     @Bean
     ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
          ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
          factory.setConsumerFactory(consumerFactory());
          return factory;
     }

     @Bean
     public ConsumerFactory<Integer, String> consumerFactory() {
         return new DefaultKafkaConsumerFactory<>(consumerConfigs());
     }

     @Bean
     public Map<String, Object> consumerConfigs() {
         Map<String, Object> props = new HashMap<>();
         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
         props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
         props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
         props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
         props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
         props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
         return props;
     }
}

Now we can listen to a Kafka topic by using the annotation @KafkaListener. For example:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class GreetingsTopicListener {

     private Logger logger = LoggerFactory.getLogger(getClass());

     @KafkaListener(topics = "greetings")
     public void listen(ConsumerRecord<?, ?> cr) throws Exception {
          logger.info(cr.toString());
     }
}

@KafkaListener will use the default listener container factory defined in the KafkaConsumerConfig class above to create the message listener. It is also possible to override this by setting the containerFactory attribute of the annotation. See the Javadoc for more details.
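For example, the listener method above could reference a factory bean explicitly by name (kafkaListenerContainerFactory is the bean defined in KafkaConsumerConfig; pointing it at a different factory bean works the same way):

@KafkaListener(topics = "greetings", containerFactory = "kafkaListenerContainerFactory")
public void listen(ConsumerRecord<?, ?> cr) throws Exception {
     logger.info(cr.toString());
}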

Creating Topics

It is also possible to add topics to the broker automatically, by defining a KafkaAdmin bean together with NewTopic @Beans (backed by the AdminClient API from the new 0.11.0.x client library), as shown in the Spring Kafka reference documentation:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaTopicConfig {

     @Value("${spring.kafka.bootstrap-servers}")
     private String brokerAsString;

     @Bean
     public KafkaAdmin admin() {
          Map<String, Object> configs = new HashMap<>();
          configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokerAsString);
          return new KafkaAdmin(configs);
     }

     @Bean
     public NewTopic topic1() {
          // NewTopic(name, numPartitions, replicationFactor)
          return new NewTopic("foo", 10, (short) 2);
     }

     @Bean
     public NewTopic topic2() {
          return new NewTopic("bar", 10, (short) 2);
     }
}

That’s about it. The code included in this blog should be sufficient for setting up a Spring Boot project for a messaging system using Spring Kafka.