Handle S3 Events with AWS Lambda

This post describes how to implement an AWS Lambda function, using the AWS Java SDK, that is triggered when an object is created in a given S3 bucket. The Lambda function will publish a message to an SQS destination based on the name of the object.

Project Setup

First, we need to pull in the project dependencies for Lambda and S3 in Maven.

<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-lambda-java-core</artifactId>
	<version>1.1.0</version>
</dependency>
<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-lambda-java-events</artifactId>
	<version>1.3.0</version>
</dependency>
<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>aws-java-sdk-s3</artifactId>
	<version>1.11.549</version>
</dependency>
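
Note that the handler below publishes to SQS through Spring's JmsTemplate, so you will also need Spring JMS and the Amazon SQS Java Messaging Library on the classpath. The versions below are only illustrative; pick whatever matches your project.

<dependency>
	<groupId>org.springframework</groupId>
	<artifactId>spring-jms</artifactId>
	<version>5.1.7.RELEASE</version>
</dependency>
<dependency>
	<groupId>com.amazonaws</groupId>
	<artifactId>amazon-sqs-java-messaging-lib</artifactId>
	<version>1.0.8</version>
</dependency>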

Implementing the Handler

Now we can implement the Lambda function to handle the S3Event:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

import org.springframework.jms.core.JmsTemplate;

public class S3ObjectCreateHandler {

    // The following environment variables should be set up in Lambda
    private static final String ENV_SQS_DESTINATION_IMAGES = "sqs_destination_images";
    private static final String ENV_SQS_DESTINATION_VIDEOS = "sqs_destination_videos";

    public Object handleRequest(S3Event input, Context context) {

        JmsTemplate jmsTemplate = ... // get a JmsTemplate for publishing to SQS via JMS (see the sketch below)

        for (S3EventNotificationRecord record : input.getRecords()) {
            String s3Key = record.getS3().getObject().getKey();
            String s3Bucket = record.getS3().getBucket().getName();
            String destination = getDestinationFromKey(s3Key);

            context.getLogger().log("found id: " + s3Bucket + " " + s3Key);

            if (destination == null) {
                context.getLogger().log("Environment variable not set up for key: " + s3Key);
            } else {
                jmsTemplate.convertAndSend(destination, new S3ObjectCreateMessage(s3Bucket, s3Key));
            }
        }
        return "success";
    }

    // Route the object to an SQS destination based on its file extension
    private String getDestinationFromKey(String key) {
        if (key.endsWith(".jpeg")) {
            return System.getenv(ENV_SQS_DESTINATION_IMAGES);
        } else if (key.endsWith(".mp4")) {
            return System.getenv(ENV_SQS_DESTINATION_VIDEOS);
        }
        return null;
    }
}

The code above is fairly straightforward. The handler takes an S3Event input and iterates over each record of the event to get the object for processing. As an example, each object is routed based on its suffix and a message is sent to the corresponding SQS destination via JMS. For how to set up JMS to connect to SQS, refer to my previous post here; a minimal sketch also follows below.
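
In case you do not want to jump to that post, here is a minimal sketch of one way to obtain the JmsTemplate, assuming the Amazon SQS Java Messaging Library and Spring JMS dependencies from the project setup above. The factory class and method names are just for illustration.

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import org.springframework.jms.core.JmsTemplate;

public class SqsJmsTemplateFactory {

    // Hypothetical helper: builds a JmsTemplate backed by SQS.
    // The Lambda execution role's credentials and region are picked up
    // automatically by the default SQS client.
    public static JmsTemplate createJmsTemplate() {
        SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.defaultClient());
        return new JmsTemplate(connectionFactory);
    }
}

With the default destination resolver, convertAndSend(destination, message) then resolves the destination string to the SQS queue of that name.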

Deploy to AWS

Next, we deploy the above code to Lambda using the AWS Console.

Add S3 Triggers

Add the trigger for S3 by dragging and dropping S3 from the Designer panel on the left and then setting the configuration as required. You should end up with something like below:

[Screenshot: S3 trigger configuration in the Lambda Designer]
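
For reference, when the trigger fires, the handler receives an event payload along these lines (heavily trimmed to just the fields our code reads; the bucket and key values are examples). It can also be used as a test event in the console:

{
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": { "name": "my-bucket" },
                "object": { "key": "photos/cat.jpeg" }
            }
        }
    ]
}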

Add Function Code

Build and upload the jar file and set the name of the handler function, for example:

au.com.myblog.awslambda.S3ObjectCreateHandler::handleRequest

Under the Environment variables section, add the key-value pairs for the SQS destinations:

[Screenshot: Lambda environment variables configuration]

Update Execution role

Since the function publishes messages to SQS, we need to update the execution role to have permission to do so. For example, we can attach the following policy to the IAM role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "sqs:SendMessage",
                ...
            ],
            "Resource": "arn:aws:sqs:<region>:<aws account number>:queue*"
        }
    ]
}

That’s it. You can now test the Lambda function by uploading a file into your S3 bucket.
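
For example, you can push a test object from code using the aws-java-sdk-s3 dependency we already declared (the bucket name and file path are placeholders):

import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class UploadTestObject {

    public static void main(String[] args) {
        // Uploading a .jpeg should result in a message on the images SQS destination
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.putObject("my-bucket", "photos/cat.jpeg", new File("cat.jpeg"));
    }
}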

Can’t Save S3 trigger with “Configuration is ambiguously defined…”

One last note. You may encounter the above error if you play with different S3 trigger options (prefix, suffix) or add/delete triggers on the same bucket. The easiest way to fix this is to go to your S3 bucket, open the Properties tab, and scroll down to Events under Advanced settings. You can delete any unwanted notifications there. Afterwards, you should be able to save your new S3 triggers in Lambda.
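
If you prefer to do the cleanup programmatically, the same S3 SDK exposes the bucket notification configuration. Here is a rough sketch that lists and then clears all notifications on a bucket (the bucket name is a placeholder), so use it with care:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketNotificationConfiguration;

public class ResetBucketNotifications {

    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Inspect the current notification configuration for the bucket
        BucketNotificationConfiguration current =
                s3.getBucketNotificationConfiguration("my-bucket");
        System.out.println(current.getConfigurations());

        // Replace it with an empty configuration, removing any stale S3 triggers
        s3.setBucketNotificationConfiguration("my-bucket",
                new BucketNotificationConfiguration());
    }
}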