Integrate Docker with Maven for Spring Boot projects

This blog will demonstrate how to use a number of Maven plugins to integrate Docker into a Spring Boot Maven project. The objective is to rebuild the Docker image for the project seamlessly whenever Maven is run to build and release a new jar file.

The source code for this project can be found here

POM File

The build lifecycle in the project's pom.xml file is updated to include three plugins, as shown below:

<build>
 <plugins>
    <plugin>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
    <plugin>
       <artifactId>maven-resources-plugin</artifactId>
       <executions>
          <execution>
             <id>copy-resources</id>
             <phase>process-resources</phase>
             <goals>
                 <goal>copy-resources</goal>
             </goals>
             <configuration>
               <outputDirectory>${basedir}/target</outputDirectory>
               <resources>
                  <resource>
                     <directory>src/main/resources/docker</directory>
                     <includes>
                        <include>Dockerfile</include>
                     </includes>
                  </resource>
               </resources>
             </configuration>
           </execution>
        </executions>
     </plugin>
     <plugin>
        <groupId>com.google.code.maven-replacer-plugin</groupId>
        <artifactId>replacer</artifactId>
        <version>1.5.3</version>
        <executions>
            <execution>
               <phase>prepare-package</phase>
               <goals>
                   <goal>replace</goal>
               </goals>
            </execution>
        </executions>
        <configuration>
             <file>${basedir}/target/Dockerfile</file>
             <replacements>
                 <replacement>
                     <token>IMAGE_VERSION</token>
                     <value>${project.version}</value>
                 </replacement>
             </replacements>
         </configuration>
     </plugin>
     <plugin>
         <groupId>com.spotify</groupId>
         <artifactId>dockerfile-maven-plugin</artifactId>
         <version>${version.dockerfile-maven}</version>
         <executions>
             <execution>
                <id>default</id>
                <goals>
                    <goal>build</goal>
                    <goal>push</goal>
                </goals>
             </execution>
         </executions>
         <configuration>
              <contextDirectory>${project.build.directory}</contextDirectory>
              <repository>image name here</repository>
              <tag>${project.version}</tag>
         </configuration>
     </plugin>
  </plugins>
</build>

The pom file assumes the Dockerfile can be found in the source folder src/main/resources/docker. You can define your own Dockerfile as needed for your project. For demo purposes, I am using, with minor modifications, the sample Dockerfile found in this blog:

FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD docker-maven-IMAGE_VERSION.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

Note the token IMAGE_VERSION: it will be replaced with the version of the jar file being built, so that ADD docker-maven-IMAGE_VERSION.jar becomes, for example, ADD docker-maven-0.0.1-SNAPSHOT.jar.

The first step in the build is to copy the Dockerfile above to the /target (i.e. build output) folder using the resources plugin. This is needed for two reasons: (1) the jar filename in the ADD instruction must be set to match the version being built, and (2) Docker does not allow the ADD source file to be outside the context directory of the Dockerfile, so it has to sit in the same directory as the jar file.

The second step is to set, in the Dockerfile just copied, the correct version of the jar file to be included in the Docker image. The Maven Replacer plugin is used to replace the token IMAGE_VERSION in the Dockerfile with the Maven variable project.version.

Finally, we run the Spotify Dockerfile Maven plugin to build and push the Docker image. The plugin allows you to set which repository to use. Note we set the tag to the Maven variable project.version, as in step 2, to make sure that the image tag matches that of the jar file.

Running Maven

Now whenever the project is built or deployed in Maven using the standard mvn install or mvn deploy, the corresponding Docker image will also be built or pushed to the repository. This also works for mvn release:prepare and mvn release:perform when releasing a tagged version of the jar file.

Note you would need to set up the certificates required to access the Docker daemon. For example, include the following environment variables:

DOCKER_HOST=<host ip address>
DOCKER_TLS_VERIFY=true
DOCKER_CERT_PATH=C:\Users\<username>\.docker\machine\certs

Consult Docker documentation for more details on secure access to the Docker daemon.

Below is an excerpt of what you would see in the console when running mvn install:

[INFO] ------------------------------------------------------------------------
[INFO] Building docker-maven 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
...
[INFO] --- maven-resources-plugin:2.6:copy-resources (copy-resources) @ docker-maven ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ docker-maven ---
...
[INFO] --- replacer:1.5.3:replace (default) @ docker-maven ---
[INFO] Replacement run on 1 file.
[INFO] 
[INFO] --- maven-jar-plugin:2.6:jar (default-jar) @ docker-maven ---
[INFO] Building jar: D:\src\blog_docker_maven\target\docker-maven-0.0.1-SNAPSHOT.jar
...
[INFO] Image will be built as <image name here>
[INFO] 
[INFO] Step 1/7 : FROM frolvlad/alpine-oraclejdk8:slim
[INFO] Pulling from frolvlad/alpine-oraclejdk8
...

Using Maven build number plugin to load code revision details

In this blog post, I will demonstrate how to use the Maven build number plugin to get the build number and version details from your source code repository for use in Spring web applications. I use this technique for a number of web apps I develop to record which version of the code (i.e. tag) each system is running on, to help operational support and debugging.

Project configuration

I have the following project setup:

  1. Maven 3
  2. Subversion as source code repository
  3. Eclipse IDE with m2e plugin

Getting build number from subversion

1. Add the buildnumber-maven-plugin to the pom.xml file

First add the following to your pom file:

<plugin>
 <groupId>org.codehaus.mojo</groupId>
 <artifactId>buildnumber-maven-plugin</artifactId>
 <version>1.0</version>
 <executions>
      <execution>
           <phase>validate</phase>
           <goals>
                <goal>create</goal>
           </goals>
      </execution>
 </executions>
 <configuration>
      <doCheck>false</doCheck>
      <doUpdate>false</doUpdate>
      <providerImplementations>
          <svn>javasvn</svn>
      </providerImplementations>
 </configuration>
 <dependencies>
      <dependency>
           <groupId>com.google.code.maven-scm-provider-svnjava</groupId>
           <artifactId>maven-scm-provider-svnjava</artifactId>
           <version>2.0.2</version>
      </dependency>
      <dependency>
           <groupId>org.tmatesoft.svnkit</groupId>
           <artifactId>svnkit</artifactId>
           <version>1.8.6</version>
      </dependency>
 </dependencies>
</plugin>

The above plugin config adds the build number plugin to the build lifecycle. Note the use of the javasvn provider to connect to SVN.

2. Add build property placeholders

The build number plugin now runs every time you build the project and makes the build number (e.g. the SVN revision number) available to Maven. The next step is to include a properties file so the values can be passed into the web app via Spring. For example, create a build.properties file (e.g. in src/main/resources) as below, make sure Maven resource filtering is enabled for it so the placeholders are substituted at build time, and add it to the files loaded by the Spring property placeholder configurer:

build.properties:
# build.version - Maven version
# build.revision - source code revision number
# build.timestamp - long value of the check-in time
build.version=${version}
build.revision=${buildNumber}
build.timestamp=${timestamp}

Running a build should then produce the file with the placeholders replaced with real values from Subversion, e.g.

build.properties:
build.version=1.2.3
build.revision=9876
build.timestamp=12345459
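
For the ${build.*} placeholders used in the next step to resolve at runtime, the properties file also has to be registered with a Spring property placeholder configurer. Below is a minimal sketch using Java config (assuming Spring 3.1+; the class name BuildInfoConfig is mine, and the same can be achieved with a <context:property-placeholder> element in XML):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
@PropertySource("classpath:build.properties")
public class BuildInfoConfig {

    // Must be static so the configurer is registered before regular beans are created
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholder() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}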

3. Using build number in web app

Now you can use the build number in your web app, for example in a Spring MVC handler interceptor that injects the values into the incoming HTTP request for display in the web page footer:

public class PageRequestInterceptor extends HandlerInterceptorAdapter {

    @Value("${build.version}")
    private String buildVersion;

    @Value("${build.revision}")
    private String buildRevision;

    private DateTime buildTimestamp;

    @Value("${build.timestamp}")
    public void setBuildTimestamp(String timestamp) {
        buildTimestamp = new DateTime(Long.parseLong(timestamp));
    }

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        request.setAttribute("buildVersion", buildVersion);
        request.setAttribute("buildRevision", buildRevision);
        request.setAttribute("buildDateTime", buildTimestamp);
        return true;
    }
}
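
The interceptor also has to be registered with Spring MVC; that step is not shown above. Below is a minimal sketch using Java config (assuming Spring 3.1+; the WebConfig class name is mine, and with XML config the equivalent is an <mvc:interceptors> entry):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Bean
    public PageRequestInterceptor pageRequestInterceptor() {
        return new PageRequestInterceptor();
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the build info interceptor to every incoming request
        registry.addInterceptor(pageRequestInterceptor());
    }
}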

4. Fix up Eclipse m2e plugin lifecycle mapping

If you are using Eclipse, you also need to update the Maven plugin lifecycle mapping to enable the build number plugin. Otherwise, the Eclipse auto build will overwrite the resolved build properties file created by the build number plugin (e.g. in /target/classes) with the unresolved one in your source path. Add the following to the pom file as a child of the <build> tag:

<pluginManagement>
 <plugins>
 <!-- This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself. -->
      <plugin>
           <groupId>org.eclipse.m2e</groupId>
           <artifactId>lifecycle-mapping</artifactId>
           <version>1.0.0</version>
           <configuration>
                <lifecycleMappingMetadata>
                     <pluginExecutions>
                        <pluginExecution>
                            <pluginExecutionFilter>
                            <groupId>org.codehaus.mojo</groupId>
                            <artifactId>buildnumber-maven-plugin</artifactId>
                            <versionRange>[1.0,)</versionRange>
                            <goals>
                                <goal>create</goal>
                            </goals>
                            </pluginExecutionFilter>
                            <action>
                                <execute>
                                  <runOnIncremental>true</runOnIncremental>
                                </execute>
                            </action>
                       </pluginExecution>
                </pluginExecutions>
             </lifecycleMappingMetadata>
          </configuration>
      </plugin>
   </plugins>
 </pluginManagement>

You can verify that the above is working by looking at the project's Maven lifecycle mapping properties in Eclipse: right-click on the project, then select Properties->Maven->Lifecycle Mapping.

That’s it.

Migrating a Java web app for deployment to AWS Elastic Beanstalk

This blog documents my experience converting an existing Java web project in Eclipse for deployment to AWS. My starting point is a Maven Java web project with a MySQL backend. My objective is to be able to deploy the site as an AWS Elastic Beanstalk application, using the AWS Toolkit for Eclipse.

Download and install AWS Eclipse Toolkit

The first step is to download and install the AWS Eclipse Toolkit by following the documentation here.

Convert project into AWS Java web project

In order to deploy a web project to AWS, one can use the Eclipse Toolkit to create an AWS Java web project and work from there. For an existing project, the following manual steps are needed.

First, create a dummy project by clicking on the AWS icon on the Eclipse toolbar and then clicking on New AWS Java Web Project. Select the option to create the Travel Log sample web application. We only need the Eclipse WST settings of this project. Copy the following files from the .settings folder of the newly created project into the .settings folder of the Maven web project:

  1. org.eclipse.wst.common.component
  2. org.eclipse.wst.common.project.facet.core.xml

Now modify the first file to use the Maven project folder structure. For example, from

<?xml version="1.0" encoding="UTF-8"?>
<project-modules id="moduleCoreId" project-version="1.5.0">
 <wb-module deploy-name="aws-template">
 <wb-resource deploy-path="/" source-path="/WebContent" tag="defaultRootSource"/>
 <wb-resource deploy-path="/WEB-INF/classes" source-path="/src"/>
 <property name="context-root" value="aws-template"/>
 <property name="java-output-path" value="/aws-template/build/classes"/>
 </wb-module>
</project-modules>

to

<?xml version="1.0" encoding="UTF-8"?>
<project-modules id="moduleCoreId" project-version="1.5.0">
 <wb-module deploy-name="myblog">
 <wb-resource deploy-path="/WEB-INF/classes" source-path="/src"/>
 <wb-resource deploy-path="/" source-path="/target/myblog"/>
 <property name="context-root" value="myblog"/>
 <property name="java-output-path" value="/myblog/build/classes"/>
 </wb-module>
</project-modules>

Externalize JDBC properties for RDS

My project uses the bean post processor class org.springframework.beans.factory.config.PropertyPlaceholderConfigurer to load JDBC connection properties from a properties file to configure the data source:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
 destroy-method="close">
 <property name="driverClassName">
 <value>net.sf.log4jdbc.DriverSpy</value>
 </property>
 <property name="url">
 <value>${db.url}</value>
 </property>
 <property name="username">
 <value>${username}</value>
 </property>
 <property name="password">
 <value>${password}</value>
 </property>
 </bean>

and the properties file

# jdbc.properties
db.url=jdbc:log4jdbc:mysql://localhost:3306/blog
username=<app-user>
password=<app-password>

Instead of using a property file, the JDBC connection properties need to be externalised and passed to the Elastic Beanstalk container (Tomcat). AWS Elastic Beanstalk provides a number of environment properties for this and other purposes. You can find them by clicking Environment Details -> Edit Configuration for the Elastic Beanstalk application environment and then choosing the Container tab. Under the header Environment Properties, you will find a number of properties to use. Below are the ones I use to store the JDBC properties:

AWS_ACCESS_KEY_ID – username

AWS_SECRET_KEY – password

JDBC_CONNECTION_STRING – JDBC connection string, e.g. jdbc:log4jdbc:mysql://<rds endpoint url>:3306/ebdb

Note that by default AWS Elastic Beanstalk uses ebdb as the database name.

Update Spring bean definitions of datasource bean

The only thing left is to update the bean definition of the data source to use the above environment properties:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
 destroy-method="close">
 <property name="driverClassName">
 <value>net.sf.log4jdbc.DriverSpy</value>
 </property>
 <property name="url">
 <value>${JDBC_CONNECTION_STRING}</value>
 </property>
 <property name="username">
 <value>${AWS_ACCESS_KEY_ID}</value>
 </property>
 <property name="password">
 <value>${AWS_SECRET_KEY}</value>
 </property>
 </bean>

That’s it. The PropertyPlaceholderConfigurer class will resolve the properties using the JVM system properties passed into the Tomcat container. The jdbc.properties file is no longer used and can be deleted.

As an aside, my project uses Jetty locally, so I have to add the following to the VM arguments of the Eclipse Run Configuration (under JRE->VM arguments):

-DJDBC_CONNECTION_STRING=jdbc:log4jdbc:mysql://localhost:3306/blog
-DAWS_ACCESS_KEY_ID=root
-DAWS_SECRET_KEY=root

A configurable framework for implementing full text search using Hibernate Search

In this blog, I will describe a database-driven framework that I implemented to perform full text search of Hibernate entities using the Hibernate Search project.

Setup Hibernate Search

Including Hibernate Search in your project is straightforward. Follow the instructions in the documentation here. I add the following Maven dependencies:

 <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-search</artifactId>
      <version>4.1.1.Final</version>
 </dependency>
 <!-- Additional Analyzers: -->
 <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-search-analyzers</artifactId>
      <version>4.1.1.Final</version>
 </dependency>
 <!-- Infinispan integration: -->
 <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-search-infinispan</artifactId>
      <version>4.1.1.Final</version>
 </dependency>

The framework

Domain layer

The “configurable” part of the framework is built around a SearchPreference class, as shown below:

@Entity
@Indexed
public class SearchPreference extends AbstractEntity {

    @Field(index=Index.YES, analyze=Analyze.NO, store=Store.NO)
    private String entityName;

    private String propertyName;

    @Enumerated(EnumType.STRING)
    @Field(index=Index.YES, analyze=Analyze.NO, store=Store.NO)
    private SearchType searchType;

    private BoolType boolType;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    @Override
    public long getId() {
        return id;
    }

    // rest of code omitted here

Note:

(1) The class SearchPreference is used to store the full text search config of an indexed property of the target Hibernate entity. It is itself a Hibernate entity for persistence to the database and has the following properties:

  • entityName – fully qualified class name of the target Hibernate entity
  • propertyName – name of an indexed property of the target Hibernate entity
  • searchType – an enum to define the type of search for this field, i.e. match, wildcard, phrase
  • boolType – an enum to define how the field’s subquery is aggregated into the overall query (must, should); a sketch of both enums is shown after these notes

(2) The class AbstractEntity is an abstract entity class to be extended by all Hibernate entity classes.

(3)  A Hibernate entity is associated with multiple SearchPreference objects for full text search across multiple fields.
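
The SearchType and BoolType enums referenced above are not listed in the original code. Below is a minimal sketch consistent with how they are used in this post (the PHRASE constant is an assumption based on the search types mentioned above; only MATCH and WILDCARD appear in the repository code that follows):

// Type of Lucene query to build for a field (PHRASE is assumed, not shown in the repository code)
public enum SearchType {
    MATCH, WILDCARD, PHRASE
}

// How a field's subquery is combined into the overall boolean query
public enum BoolType {
    MUST, SHOULD
}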

Repository layer

The repo layer consists of the following interfaces:

public interface ISearchPreferenceRepository extends IEntityRepository<SearchPreference> {

List<SearchPreference> getSearchFieldsFor(Class<? extends AbstractEntity> entityClass);
}

and

public interface IEntityRepository<T extends AbstractEntity> {

    List<T> search(String keyword, int firstResult, int fetchSize, SearchPreference... fields);
}

The interface ISearchPreferenceRepository provides the method getSearchFieldsFor() to load the search field configs as a list of SearchPreference objects using HQL, one for each field to be queried. The implementation is shown below.

@Transactional
@Repository("searchPreferenceRepository")
public class SearchPreferenceRepository extends AbstractEntityRepository<SearchPreference> implements ISearchPreferenceRepository {

    private static final String hql = "FROM " + SearchPreference.class.getName() + " WHERE entityName=?";

    @Autowired(required=true)
    public SearchPreferenceRepository(SessionFactory sessionFactory) {
        super(sessionFactory);
    }

    public List<SearchPreference> getSearchFieldsFor(Class<? extends AbstractEntity> entityClass) {
        Query hqlQuery = sessionFactory.getCurrentSession().createQuery(hql);
        hqlQuery.setParameter(0, ClassUtils.getShortClassName(entityClass));
        return hqlQuery.list();
    }

}

The actual full text search is done via the search() method of the IEntityRepository interface. Below is the implementation of the method:

public List<ENTITY> search(String keyword, int firstResult, int fetchSize, SearchPreference... fields) {

    Session session = sessionFactory.getCurrentSession();
    FullTextSession fullTextSession = Search.getFullTextSession(session);

    QueryBuilder qb = fullTextSession.getSearchFactory().buildQueryBuilder().forEntity(getType()).get(); // [1]

    BooleanJunction boolJn = qb.bool();
    List<Query> subqueries = new ArrayList<Query>();

    for (int i = 0; i < fields.length; i++) {

        SearchPreference field = fields[i];
        Query fieldQuery = null;

        if (SearchType.MATCH.equals(field.getSearchType())) { // [2]
            fieldQuery = qb.keyword().onField(field.getPropertyName()).matching(keyword).createQuery();
        } else if (SearchType.WILDCARD.equals(field.getSearchType())) {
            fieldQuery = qb.keyword().wildcard().onField(field.getPropertyName()).matching(keyword).createQuery();
        }

        subqueries.add(fieldQuery);

        if (BoolType.MUST == field.getBoolType()) { // [3]
            boolJn = boolJn.must(fieldQuery);
        }
        if (BoolType.SHOULD == field.getBoolType()) {
            boolJn = boolJn.should(fieldQuery);
        }
    }

    Query query = boolJn.createQuery(); // [4]

    // wrap the Lucene query in an org.hibernate.Query
    org.hibernate.Query hibQuery = fullTextSession.createFullTextQuery(query, getType()); // [5]
    hibQuery.setFirstResult(firstResult);
    hibQuery.setFetchSize(fetchSize);

    // execute search
    List result = hibQuery.list(); // [6]
    return result;
}

It’s a rather long method, but basically what it does is create a Lucene search query via the Hibernate Search API using the SearchPreference objects in the input arguments. A Hibernate query object is then created and executed to generate the search results.

Note:

[1] – Create a DSL query builder of type QueryBuilder for the entity type.

[2] – For each field of type SearchPreference, a subquery is created matching the input search string keyword, with the query type (i.e. match, wildcard) determined by the searchType property of that SearchPreference field.

[3] – The subqueries created in [2] are aggregated here. Currently 2 aggregation operations are supported:

  • SHOULD: the query should contain the matching elements of the subquery
  • MUST: the query must contain the matching elements of the subquery

[4] – The final query object is created.

[5] – The query object in [4] of type org.apache.lucene.search.Query is converted into a core Hibernate query object by the full text session.

[6] – Finally, the query is executed to return search results.

Service layer

The two repositories ISearchPreferenceRepository and IEntityRepository are used together to implement a generic full text search service. The implementation of this layer is straightforward and not covered in detail here.
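
As an illustration only, below is a minimal sketch of what such a service might look like (the class and method names are mine, not from the original project):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service("entitySearchService")
@Transactional(readOnly = true)
public class EntitySearchService {

    @Autowired
    private ISearchPreferenceRepository searchPreferenceRepository;

    // Looks up the stored SearchPreference config for the entity type and
    // delegates the actual full text search to the entity repository.
    public <T extends AbstractEntity> List<T> search(IEntityRepository<T> repository,
            Class<T> entityClass, String keyword, int firstResult, int fetchSize) {
        List<SearchPreference> fields = searchPreferenceRepository.getSearchFieldsFor(entityClass);
        return repository.search(keyword, firstResult, fetchSize,
                fields.toArray(new SearchPreference[fields.size()]));
    }
}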

Example

Let’s say we have an entity Product:

@Entity
@Indexed
public class Product extends AbstractEntity {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    private long id;

    @Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
    private String sku;

    @Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
    private String name;

    @Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
    private String description;

    // ... getters and setters

Note the properties (sku, name, description) are indexed using Hibernate Search annotations. This is required for the entity to be queried via full text search. Let’s say we have the following data in the product table (mapped by Hibernate by default):

sku     name     description
sku_1   name_1   product desc 1
sku_2   name_2   product desc 2
sku_3   name_3   product desc 3

The following excerpt of a unit test class demonstrates how the repository classes can be used to perform various full text searches of the Product entity:

public class ProductRepositoryTest {
@Autowired
 @Qualifier("productRepository")
 private IEntityRepository<Product> repository;
@Test
 public void testSearchAndMatch() throws Exception {
      SearchPreference[] fieldPref = new SearchPreference[2];
      fieldPref[0] = createPref(Product.class.getName(), "sku", SearchType.MATCH, BoolType.MUST);
      fieldPref[1] = createPref(Product.class.getName(), "description", SearchType.MATCH, BoolType.MUST);
      List<Product> results = repository.search("sku_1", 0, 100, fieldPref);
      assertEquals(0, results.size()); // search returns no result as description MUST match keyword "sku_1"
 }
@Test
 public void testSearchOrMatch() throws Exception {
      SearchPreference[] fieldPref = new SearchPreference[2];
      fieldPref[0] = createPref(Product.class.getName(), "sku", SearchType.MATCH, BoolType.SHOULD);
      fieldPref[1] = createPref(Product.class.getName(), "description", SearchType.MATCH, BoolType.SHOULD);
      List<Product> results = repository.search("sku_1*", 0, 100, fieldPref);
      assertTrue(results.size() > 0); // search returns results where sku matches keyword "sku_1", no result from matching description field
 }
@Test
 public void testSearchAndWildcard() throws Exception {
      SearchPreference[] fieldPref = new SearchPreference[2];
      fieldPref[0] = createPref(Product.class.getName(), "sku", SearchType.WILDCARD, BoolType.MUST);
      fieldPref[1] = createPref(Product.class.getName(), "description", SearchType.WILDCARD, BoolType.MUST);
      List<Product> results = repository.search("*1*", 0, 100, fieldPref);
      assertTrue(results.size() > 0); // search return results where both sku and description matches wildcard "*1*", i.e. "sku_1" and "product desc 1"
 }
@Test
 public void testSearchOrWildcard() throws Exception {
      SearchPreference[] fieldPref = new SearchPreference[2];
      fieldPref[0] = createPref(Product.class.getName(), "sku", SearchType.WILDCARD, BoolType.SHOULD);
      fieldPref[1] = createPref(Product.class.getName(), "description", SearchType.WILDCARD, BoolType.SHOULD);
      List<Product> results = repository.search("sku_*", 0, 100, fieldPref);
      assertTrue(results.size() > 0); // search return results where sku matches wildcard "sku_*", no result from matching description field
 }
         // ...

Note the method createPref() returns a SearchPreference instance constructed from the method’s input arguments; the code is omitted above. Refer to the highlighted comments in each test for an explanation.
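
A hypothetical sketch of createPref() is shown below (the real implementation is not listed in this post; I am assuming here that SearchPreference exposes simple setters for its fields):

private SearchPreference createPref(String entityName, String propertyName,
        SearchType searchType, BoolType boolType) {
    // Builds an in-memory (not persisted) search configuration for one field
    SearchPreference pref = new SearchPreference();
    pref.setEntityName(entityName);
    pref.setPropertyName(propertyName);
    pref.setSearchType(searchType);
    pref.setBoolType(boolType);
    return pref;
}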

That’s it for now.

Setup Jetty/Maven for developing web applications in Eclipse

This article describes how to set up the development environment in Eclipse to run/debug web applications for Maven based projects.

Install

First you need to add an Eclipse plugin to run Maven tasks. I use the Maven Integration (m2e) plugin [2]. Follow the instructions on the web site on how to install the plugin in Eclipse.

Second, add the Jetty plugin to your project’s pom.xml file. For example:

<plugin>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>maven-jetty-plugin</artifactId>
    <configuration>
        <scanIntervalSeconds>10</scanIntervalSeconds>
        <stopPort>9966</stopPort>
        <stopKey>foo</stopKey>
    </configuration>
</plugin>

More configuration options can be found in the plugin documentation [3].

Config and Run

Now you are ready to configure Eclipse to run/debug your web application on Jetty.

Open Eclipse’s Run Configurations and create a new configuration under Maven Build, then enter jetty:run in the Goals field. Running the configuration will start a Jetty instance on localhost:8080.

To debug Jetty, open Eclipse’s Run Configurations and create another new configuration under Maven Build. Set the goal to jetty:run as before. Then enter the following in the VM arguments field under the JRE tab:

-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=y

Now create a new Debug Configuration under Remote Java Application. Make sure the port number is identical to the value set above (i.e. 4000).

To run Jetty in debug mode, first start the jetty:run run configuration. The following will be displayed in the Eclipse Console:

Listening for transport dt_socket at address: 4000

Now start the debug configuration and Jetty should be started.

To stop jetty, create and run a Run Configuration under Maven Build with Goal jetty:stop.

More Info

  1. Jetty – http://jetty.codehaus.org/jetty/
  2. Maven Integration m2e plugins – http://www.eclipse.org/m2e/
  3. Jetty Maven plugin – http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin