The microprofile-lra quickstart demonstrates the use of the MicroProfile LRA specification in WildFly.

What is it?

The MicroProfile LRA specification provides an API that applications use to coordinate actions in distributed transactions based on the saga pattern. User applications enlist as participants in an LRA, which in turn notifies all enlisted participants about the LRA (transaction) outcome. The saga pattern provides different transactional guarantees than ACID transactions. Saga allows individual operations to execute immediately when they are invoked, together with the enlistment in the LRA. It also requires each participant to define a compensating action, which is a semantic undo of the original operation. Note that this doesn’t need to be the opposite action. The compensation is only required to put the system into a semantically equivalent state, not exactly the same state as before the action was invoked. If your action is, for instance, sending an email, your compensation might be another email cancelling the previous one.

If all actions execute successfully, the LRA is closed and the optional Complete callbacks are invoked on enlisted participants. If any action fails, then the LRA is cancelled and all compensation actions (Compensate callbacks) of all enlisted participants are invoked. The state of the system is said to be eventually consistent, since if we don’t start any new LRAs, the state is bound to become consistent eventually.
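
To make the pattern concrete, the following is a minimal, hypothetical sketch of an LRA participant written with the MicroProfile LRA and Jakarta REST annotations that are introduced in detail later in this quickstart; the resource paths and the email scenario are illustrative only.

package org.example.lra;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.HeaderParam;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;

import java.net.URI;

@Path("/order")
public class OrderResource {

    // Starts a new LRA (or joins an existing one) and executes the action immediately.
    @LRA
    @GET
    @Path("/place")
    public Response placeOrder(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // business action, e.g. reserve the goods and send a confirmation email
        return Response.ok(lraId.toASCIIString()).build();
    }

    // Semantic undo, invoked by the coordinator if the LRA is cancelled.
    @Compensate
    @PUT
    @Path("/compensate")
    public Response compensate(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        // e.g. release the goods and send a cancellation email
        return Response.ok().build();
    }

    // Optional callback, invoked by the coordinator if the LRA closes successfully.
    @Complete
    @PUT
    @Path("/complete")
    public Response complete(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
        return Response.ok().build();
    }
}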

The implementation used in WildFly is provided by the Narayana project.

Architecture

In this quickstart, we have a simple REST application that exposes several REST endpoints that enlist the application as different LRA participants and provide callbacks for completions and compensations. Its REST API consists of the following endpoints (a minimal client sketch follows the list):

  • GET /participant1/work - work action of Participant 1

  • GET /participant2/work - work action of Participant 2

  • PUT /participant1/compensate - compensating action of Participant 1

  • PUT /participant2/compensate - compensating action of Participant 2

  • PUT /participant1/complete - complete action of Participant 1

  • PUT /participant2/complete - complete action of Participant 2
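
For example, once the quickstart is deployed, the work endpoint of Participant 1 can be called with any HTTP client. The following minimal sketch uses the standard Jakarta REST client API and assumes the default local deployment URL http://localhost:8080/microprofile-lra; the compensate and complete endpoints are called by the LRA coordinator rather than by the user.

import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.core.Response;

public class WorkClient {
    public static void main(String[] args) {
        try (Client client = ClientBuilder.newClient()) {
            // Invokes the work action of Participant 1.
            Response response = client
                .target("http://localhost:8080/microprofile-lra/participant1/work")
                .request()
                .get();
            // The response body contains the id of the LRA that was started.
            System.out.println(response.getStatus() + ": " + response.readEntity(String.class));
        }
    }
}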

System Requirements

The application this project produces is designed to be run on WildFly Application Server 33 or later.

All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.

Use of the WILDFLY_HOME and QUICKSTART_HOME Variables

In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.

When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.

Back Up the WildFly Standalone Server Configuration

Before you begin, back up your server configuration file.

  1. If it is running, stop the WildFly server.

  2. Back up the WILDFLY_HOME/standalone/configuration/standalone-microprofile.xml file.

After you have completed testing this quickstart, you can replace this file to restore the server to its original configuration.

Start the WildFly Standalone Server

  1. Open a terminal and navigate to the root of the WildFly directory.

  2. Start the WildFly server with the MicroProfile profile by typing the following command.

    $ WILDFLY_HOME/bin/standalone.sh -c standalone-microprofile.xml
    Note
    For Windows, use the WILDFLY_HOME\bin\standalone.bat script.

Configure the Server

You can configure the LRA extensions and subsystems (for the LRA coordinator and the LRA participant respectively) by running CLI commands. For your convenience, this quickstart batches the commands into an enable-microprofile-lra.cli script provided in the root directory of this quickstart.

  1. Before you begin, make sure you have backed up the server configuration and started the WildFly server as described above.

  2. Review the enable-microprofile-lra.cli file in the root of this quickstart directory. It enables two extensions and adds two subsystems, one for the LRA coordinator and one for the LRA participant.

  3. Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:

    $ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=enable-microprofile-lra.cli
    Note
    For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.

    You should see the following result when you run the script:

    The batch executed successfully
  4. Stop the WildFly server.

Review the Modified Server Configuration

After stopping the server, open the WILDFLY_HOME/standalone/configuration/standalone-microprofile.xml file and review the changes.

  1. The script added the following two extensions:

    <extension module="org.wildfly.extension.microprofile.lra-coordinator"/>
    <extension module="org.wildfly.extension.microprofile.lra-participant"/>
  2. And also the following two subsystems:

    <subsystem xmlns="urn:wildfly:microprofile-lra-coordinator:1.0"/>
    <subsystem xmlns="urn:wildfly:microprofile-lra-participant:1.0"/>

Solution

We recommend that you follow the instructions that create the application step by step. However, you can also go right to the completed example which is available in this directory.

Build and Deploy the Quickstart

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type the following command to build the quickstart.

    $ mvn clean package
  4. Type the following command to deploy the quickstart.

    $ mvn wildfly:deploy

This deploys the microprofile-lra/target/microprofile-lra.war to the running instance of the server.

You should see a message in the server log indicating that the archive deployed successfully.

Run the Integration Tests

This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.

Follow these steps to run the integration tests.

  1. Make sure WildFly server is started.

  2. Make sure the quickstart is deployed.

  3. Type the following command to run the verify goal with the integration-testing profile activated.

    $ mvn verify -Pintegration-testing 

Undeploy the Quickstart

When you are finished testing the quickstart, follow these steps to undeploy the archive.

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type this command to undeploy the archive:

    $ mvn wildfly:undeploy

Restore the WildFly Standalone Server Configuration

You can restore the original server configuration using either of the following methods.

Restore the WildFly Standalone Server Configuration by Running the JBoss CLI Script

  1. Start the WildFly server as described above.

  2. Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:

    $ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=restore-configuration.cli
    Note
    For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.

This script removes the added extensions and subsystems for the LRA participant and the LRA coordinator. You should see the following result when you run the script:

The batch executed successfully
process-state: reload-required

Restore the WildFly Standalone Server Configuration Manually

When you have completed testing the quickstart, you can restore the original server configuration by manually restoring the backup copy of the configuration file.

  1. If it is running, stop the WildFly server.

  2. Replace the WILDFLY_HOME/standalone/configuration/standalone-microprofile.xml file with the backup copy of the file.

Building and running the quickstart application in a bootable JAR

You can use the WildFly JAR Maven plug-in to build a WildFly bootable JAR to run this quickstart.

The quickstart pom.xml file contains a Maven profile named bootable-jar which configures the bootable JAR building:

      <profile>
          <id>bootable-jar</id>
          <build>
              <plugins>
                  <plugin>
                      <groupId>org.wildfly.plugins</groupId>
                      <artifactId>wildfly-maven-plugin</artifactId>
                      <configuration>
                          <discover-provisioning-info>
                              <version>${version.server}</version>
                          </discover-provisioning-info>
                          <bootable-jar>true</bootable-jar>
                          <!--
                            Rename the output war to ROOT.war before adding it to the server, so that the
                            application is deployed in the root web context.
                          -->
                          <name>ROOT.war</name>
                          <add-ons>...</add-ons>
                      </configuration>
                      <executions>
                          <execution>
                              <goals>
                                  <goal>package</goal>
                              </goals>
                          </execution>
                      </executions>
                  </plugin>
                  ...
              </plugins>
          </build>
      </profile>

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons
Procedure
  1. Build the quickstart bootable JAR with the following command:

    $ mvn clean package -Pbootable-jar
  2. Run the quickstart application contained in the bootable JAR:

    $ java -jar target/microprofile-lra-bootable.jar
  3. You can now interact with the quickstart application.

Note

After the quickstart application is deployed, the bootable JAR includes the application in the root context. Therefore, any URLs related to the application should not have the /microprofile-lra path segment after HOST:PORT.

Run the Integration Tests with a bootable jar

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a bootable jar.

Follow these steps to run the integration tests.

  1. Make sure the bootable jar is provisioned.

    $ mvn clean package -Pbootable-jar
  2. Start the WildFly bootable jar, this time using the WildFly Maven Jar Plugin, which is recommended for testing due to simpler automation.

    $ mvn wildfly:start-jar
  3. Type the following command to run the verify goal with the integration-testing profile activated, and specifying the quickstart’s URL using the server.host system property, which for a bootable jar by default is http://localhost:8080.

    $ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
  4. Shut down the WildFly bootable jar, again using the WildFly Maven Jar Plugin.

    $ mvn wildfly:shutdown

Building and running the quickstart application with OpenShift

Build the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server and to deploy and run the quickstart in an OpenShift environment.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with WildFly for OpenShift and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.

Prerequisites

  • You must be logged in to OpenShift and have the oc client available to connect to OpenShift

  • Helm must be installed to deploy the backend on OpenShift.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Deploy the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

Log in to your OpenShift instance using the oc login command. The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following command:

$ helm install microprofile-lra -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s 
NAME: microprofile-lra
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

oc get deployment microprofile-lra

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: 33.0.0.Final
  contextDir: microprofile-lra
deploy:
  replicas: 1
  route:
    tls:
      enabled: false

This will create a new deployment on OpenShift and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

Get the URL of the route to the deployment.

$ oc get route microprofile-lra -o jsonpath="{.spec.host}"

Access the application in your web browser using the displayed URL.

Note

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /microprofile-lra path segment after HOST:PORT.

Run the Integration Tests with OpenShift

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=http://$(oc get route microprofile-lra --template='{{ .spec.host }}') 
Note

The tests use SSL to connect to the quickstart running on OpenShift, so the certificates need to be trusted by the machine the tests are run from.

Undeploy the WildFly Source-to-Image (S2I) Quickstart from OpenShift with Helm Charts

$ helm uninstall microprofile-lra

Building and running the quickstart application with Kubernetes

Build the WildFly Quickstart to Kubernetes with Helm Charts

For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server suitable for running on Kubernetes.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the Kubernetes environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with Kubernetes and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.

Install Kubernetes

In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.

minikube start --memory='4gb'

The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.

Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.

minikube addons enable registry

In order to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All the examples below will expose it at localhost:5000.

# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"

# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &

# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"

Prerequisites

  • Helm must be installed to deploy the backend on Kubernetes.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Deploy the WildFly Source-to-Image (S2I) Quickstart to Kubernetes with Helm Charts

The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following commands:

mvn -Popenshift package wildfly:image

This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be microprofile-lra.

Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case, we tag it as localhost:5000/microprofile-lra:latest and push it to the internal registry in our Kubernetes instance:

# Tag the image
docker tag microprofile-lra localhost:5000/microprofile-lra:latest
# Push the image to the registry
docker push localhost:5000/microprofile-lra:latest

In the below call to helm install which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:

  • --set build.enabled=false - This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.

  • --set deploy.route.enabled=false - This disables route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift specific concept and thus not available on Kubernetes.

  • --set image.name="localhost:5000/microprofile-lra" - This tells the Helm chart to use the image we built, tagged and pushed to Kubernetes' internal registry above.

$ helm install microprofile-lra -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/microprofile-lra"
NAME: microprofile-lra
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

kubectl get deployment microprofile-lra

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: 33.0.0.Final
  contextDir: microprofile-lra
deploy:
  replicas: 1
  route:
    tls:
      enabled: false

This will create a new deployment on Kubernetes and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the microprofile-lra service created for us by the Helm chart.

This service will run on port 8080, and we set up the port forward to also run on port 8080:

kubectl port-forward service/microprofile-lra 8080:8080

The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.

Note

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /microprofile-lra path segment after HOST:PORT.

Run the Integration Tests with Kubernetes

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 

Undeploy the WildFly Source-to-Image (S2I) Quickstart from Kubernetes with Helm Charts

$ helm uninstall microprofile-lra

To stop the port forward you created earlier, press Ctrl+C in the terminal where the kubectl port-forward command is running (or terminate that process).

Creating the Maven Project

Generate the initial project skeleton with the Maven webapp archetype and then change into the new directory:

mvn archetype:generate \
    -DgroupId=org.wildfly.quickstarts \
    -DartifactId=microprofile-lra \
    -DinteractiveMode=false \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-webapp
cd microprofile-lra

Open the project in your favourite IDE.

Open the generated pom.xml.

The first thing to do is to change the minimum JDK to Java 11 and set the other relevant version properties:

<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>

<!-- the version for the Server -->
<version.server>33.0.0.Final</version.server>
<!-- The versions for the BOMs, Packs and Plugins -->
<version.bom.ee>33.0.0.Final</version.bom.ee>
<version.bom.microprofile>33.0.0.Final</version.bom.microprofile>
<version.plugin.wildfly>5.0.0.Final</version.plugin.wildfly>
<version.plugin.wildfly-jar></version.plugin.wildfly-jar>

Next we need to set up our dependencies. Add the following section to your pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-ee-with-tools</artifactId>
            <version>${version.bom.ee}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-microprofile</artifactId>
            <version>${version.bom.microprofile}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Now we need to add the following dependencies:

<dependency>
    <groupId>org.eclipse.microprofile.lra</groupId>
    <artifactId>microprofile-lra-api</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.ws.rs</groupId>
    <artifactId>jakarta.ws.rs-api</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>jakarta.enterprise</groupId>
    <artifactId>jakarta.enterprise.cdi-api</artifactId>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.jboss.logging</groupId>
    <artifactId>jboss-logging</artifactId>
    <scope>provided</scope>
</dependency>
Note
We need Jakarta REST (JAX-RS) since LRA exposes functionality over JAX-RS resources and uses HTTP as its communication protocol.

All dependencies can have the provided scope. The versions are taken from the BOMs defined above.

As we are going to be deploying this application to the WildFly server, let’s also add a Maven plugin that will simplify the deployment operations (you can replace the generated build section):

<build>
    <!-- Set the name of the archive -->
    <finalName>${project.artifactId}</finalName>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <version>${version.plugin.wildfly}</version>
            </plugin>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-jar-maven-plugin</artifactId>
                <version>${version.plugin.wildfly-jar}</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>

Set up the required Maven repositories (if you don’t have them set up in your global Maven settings):

<repositories>
    <repository>
        <id>jboss-public-maven-repository</id>
        <name>JBoss Public Maven Repository</name>
        <url>https://repository.jboss.org/nexus/content/groups/public</url>
        <layout>default</layout>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </snapshots>
    </repository>
    <repository>
        <id>redhat-ga-maven-repository</id>
        <name>Red Hat GA Maven Repository</name>
        <url>https://maven.repository.redhat.com/ga/</url>
        <layout>default</layout>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </snapshots>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>jboss-public-maven-repository</id>
        <name>JBoss Public Maven Repository</name>
        <url>https://repository.jboss.org/nexus/content/groups/public</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>redhat-ga-maven-repository</id>
        <name>Red Hat GA Maven Repository</name>
        <url>https://maven.repository.redhat.com/ga/</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
</pluginRepositories>

Now we are ready to start working with MicroProfile LRA.

Set up JAX-RS server and result wrapper

LRA works on top of JAX-RS. To set up the JAX-RS server in our service, we need to create a new application class, org.wildfly.quickstarts.microprofile.lra.JaxRsApplication, in the file microprofile-lra/src/main/java/org/wildfly/quickstarts/microprofile/lra/JaxRsApplication.java that looks like this:

package org.wildfly.quickstarts.microprofile.lra;

import jakarta.ws.rs.ApplicationPath;
import jakarta.ws.rs.core.Application;

@ApplicationPath("/")
public class JaxRsApplication extends Application {
}

Now we can declare our LRA JAX-RS resources.

The participants we’re going to create also report the LRA and recovery IDs they observe in a wrapper called ParticipantResult, which we create as the org.wildfly.quickstarts.microprofile.lra.ParticipantResult class:

package org.wildfly.quickstarts.microprofile.lra;

public class ParticipantResult {

    private String workLRAId;
    private String workRecoveryId;
    private String completeLRAId;
    private String completeRecoveryId;
    private String compensateLRAId;
    private String compensateRecoveryId;

    public ParticipantResult() {}

    public ParticipantResult(String workLRAId, String workRecoveryId,
                       String completeLRAId, String completeRecoveryId,
                       String compensateLRAId, String compensateRecoveryId) {
        this.workLRAId = workLRAId;
        this.workRecoveryId =  workRecoveryId;
        this.completeLRAId = completeLRAId;
        this.completeRecoveryId = completeRecoveryId;
        this.compensateLRAId = compensateLRAId;
        this.compensateRecoveryId = compensateRecoveryId;
    }

    public String getWorkLRAId() {
        return workLRAId;
    }

    public void setWorkLRAId(String workLRAId) {
        this.workLRAId = workLRAId;
    }

    public String getWorkRecoveryId() {
        return workRecoveryId;
    }

    public void setWorkRecoveryId(String workRecoveryId) {
        this.workRecoveryId = workRecoveryId;
    }

    public String getCompleteLRAId() {
        return completeLRAId;
    }

    public void setCompleteLRAId(String completeLRAId) {
        this.completeLRAId = completeLRAId;
    }

    public String getCompleteRecoveryId() {
        return completeRecoveryId;
    }

    public void setCompleteRecoveryId(String completeRecoveryId) {
        this.completeRecoveryId = completeRecoveryId;
    }

    public String getCompensateLRAId() {
        return compensateLRAId;
    }

    public void setCompensateLRAId(String compensateLRAId) {
        this.compensateLRAId = compensateLRAId;
    }

    public String getCompensateRecoveryId() {
        return compensateRecoveryId;
    }

    public void setCompensateRecoveryId(String compensateRecoveryId) {
        this.compensateRecoveryId = compensateRecoveryId;
    }

    @Override
    public String toString() {
        return "ParticipantResult{" +
            "workLRAId='" + workLRAId + '\'' +
            ", workRecoveryId='" + workRecoveryId + '\'' +
            ", completeLRAId='" + completeLRAId + '\'' +
            ", completeRecoveryId='" + completeRecoveryId + '\'' +
            ", compensateLRAId='" + compensateLRAId + '\'' +
            ", compensateRecoveryId='" + compensateRecoveryId + '\'' +
            '}';
    }
}

Creating LRA participants

In LRA, we define LRA execution and participation with the same @LRA annotation. If placed on a method, it acts similarly to the @Transactional annotation from JTA. By default, it uses the REQUIRED LRA type, meaning a new LRA is started, or an existing LRA (if passed to the invocation) is joined, before the method starts. The LRA is also closed (success) or cancelled (failure/exception) at the end of the method.

LRA currently works on top of JAX-RS resources. We can place the @LRA annotation on any JAX-RS method and the LRA is managed for us by WildFly. Let’s create a simple JAX-RS resource that uses LRA in org.wildfly.quickstarts.microprofile.lra.LRAParticipant1:

/*
 * JBoss, Home of Professional Open Source.
 * Copyright 2023, Red Hat, Inc., and individual contributors
 * as indicated by the @author tags. See the copyright.txt file in the
 * distribution for a full listing of individual contributors.
 *
 * This is free software; you can redistribute it and/or modify it
 * under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation; either version 2.1 of
 * the License, or (at your option) any later version.
 *
 * This software is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this software; if not, write to the Free
 * Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
 * 02110-1301 USA, or see the FSF site: http://www.fsf.org.
 */

package org.wildfly.quickstarts.microprofile.lra;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.HeaderParam;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.core.Context;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.core.UriInfo;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import org.jboss.logging.Logger;

import java.net.URI;

@Path("/participant1")
@ApplicationScoped
public class LRAParticipant1 {

    private static final Logger LOGGER = Logger.getLogger(LRAParticipant1.class);

    private String workLRAId;
    private String workRecoveryId;
    private String completeLRAId;
    private String completeRecoveryId;
    private String compensateLRAId;
    private String compensateRecoveryId;

    @Context
    UriInfo uriInfo;

    @LRA
    @GET
    @Path("/work")
    public Response work(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                         @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId,
                         @QueryParam("failLRA") boolean failLRA) {
        LOGGER.infof("Executing action of Participant 1 enlisted in LRA %s " +
            "that was assigned %s participant Id.", lraId, participantId);

        workLRAId = lraId.toASCIIString();
        workRecoveryId = participantId.toASCIIString();
        compensateLRAId = null;
        compensateRecoveryId = null;
        completeLRAId = null;
        completeRecoveryId = null;

        return failLRA ? Response.status(Response.Status.INTERNAL_SERVER_ERROR).entity(lraId.toASCIIString()).build() :
            Response.ok(lraId.toASCIIString()).build();
    }

    @Compensate
    @PUT
    @Path("/compensate")
    public Response compensateWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                                   @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
        LOGGER.infof("Compensating action for Participant 1 (%s) in LRA %s.", participantId, lraId);

        compensateLRAId = lraId.toASCIIString();
        compensateRecoveryId = participantId.toASCIIString();

        return Response.ok().build();
    }

    @Complete
    @PUT
    @Path("/complete")
    public Response completeWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                                 @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
        LOGGER.infof("Complete action for Participant 1 (%s) in LRA %s.", participantId, lraId);

        completeLRAId = lraId.toASCIIString();
        completeRecoveryId = participantId.toASCIIString();

        return Response.ok().build();
    }

    @GET
    @Path("/result")
    @Produces(MediaType.APPLICATION_JSON)
    public ParticipantResult getParticipantResult() {
        return new ParticipantResult(workLRAId, workRecoveryId,
            completeLRAId, completeRecoveryId,
            compensateLRAId, compensateRecoveryId);
    }
}

Let’s look at it part by part.

The most important method is called work and it looks like this:

@LRA
@GET
@Path("/work")
public Response work(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                     @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId,
                     @QueryParam("failLRA") boolean failLRA) {
    LOGGER.infof("Executing action of Participant 1 enlisted in LRA %s " +
        "that was assigned %s participant Id.", lraId, participantId);

    workLRAId = lraId.toASCIIString();
    workRecoveryId = participantId.toASCIIString();
    compensateLRAId = null;
    compensateRecoveryId = null;
    completeLRAId = null;
    completeRecoveryId = null;

    return failLRA ? Response.status(Response.Status.INTERNAL_SERVER_ERROR).entity(lraId.toASCIIString()).build() :
        Response.ok(lraId.toASCIIString()).build();
}

In this JAX-RS GET method, we also use the @LRA annotation, which either starts a new LRA or joins an existing one, as defined by the default LRA type REQUIRED. The decision is based on the LRA.LRA_HTTP_CONTEXT_HEADER header, which we called lraId. If the framework starts a new LRA, this header is automatically populated with its ID. If the caller specifies the LRA.LRA_HTTP_CONTEXT_HEADER manually in the request, the received LRA is joined. As you can see, the LRA context or ID is propagated in HTTP headers.

The second header parameter, LRA.LRA_HTTP_RECOVERY_HEADER, is a unique participant ID for a particular enlistment within the LRA. If we enlisted LRAParticipant1 in the same LRA (same LRA.LRA_HTTP_CONTEXT_HEADER) multiple times, this recovery ID would differ for each enlistment, so we can associate the compensate and complete invocations with the right enlistment.
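
So far we have relied on the default REQUIRED type. The enlistment mode can also be set explicitly on the annotation; the following hedged sketch (these methods are not part of the quickstart) shows two other values from the specification’s LRA.Type enum. See the specification for the full list of types and their exact semantics.

// Illustrative only: alternative enlistment modes defined by the specification.

@LRA(LRA.Type.REQUIRES_NEW)   // always starts a new LRA, even if one was passed in
@GET
@Path("/isolated-work")
public Response isolatedWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
    return Response.ok(lraId.toASCIIString()).build();
}

@LRA(LRA.Type.MANDATORY)      // rejects the call if the caller supplies no LRA context
@GET
@Path("/mandatory-work")
public Response mandatoryWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId) {
    return Response.ok(lraId.toASCIIString()).build();
}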

Each LRA participant needs to define a method annotated with @Compensate that provides the compensating action.

@Compensate
@PUT
@Path("/compensate")
public Response compensateWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                               @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
    LOGGER.infof("Compensating action for Participant 1 (%s) in LRA %s.", participantId, lraId);

    compensateLRAId = lraId.toASCIIString();
    compensateRecoveryId = participantId.toASCIIString();

    return Response.ok().build();
}

The compensation is defined by the @Compensate annotation, which needs to be placed on a JAX-RS PUT method so the LRA coordinator knows how to call it. For simplicity, we just print messages to the console. The participant can control how it finishes its participation in the LRA via the returned status code. Please see the specification for more details.
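
For instance, if a compensation cannot finish immediately, the specification allows the participant to report an in-progress or failed state. The following is a hedged sketch of that option (this quickstart does not use it, and compensationStillRunning/compensationFailed are hypothetical helpers); consult the specification for the exact contract.

// Requires: import org.eclipse.microprofile.lra.annotation.ParticipantStatus;
@Compensate
@PUT
@Path("/compensate")
public Response compensateWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                               @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
    if (compensationStillRunning(lraId)) {
        // 202 Accepted tells the coordinator the compensation is still in progress,
        // so it will ask the participant about the outcome again later.
        return Response.accepted().build();
    }
    if (compensationFailed(lraId)) {
        // Report a final failure state to the coordinator.
        return Response.ok(ParticipantStatus.FailedToCompensate.name()).build();
    }
    // A plain 200 OK means the compensation succeeded.
    return Response.ok().build();
}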

The complete method looks similar. It uses the @Complete annotation and also needs to be a JAX-RS PUT method.

@Complete
@PUT
@Path("/complete")
public Response completeWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                             @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
    LOGGER.infof("Complete action for Participant 1 (%s) in LRA %s.", participantId, lraId);

    completeLRAId = lraId.toASCIIString();
    completeRecoveryId = participantId.toASCIIString();

    return Response.ok().build();
}

The LRA coordinator invokes the @Compensate method when the LRA cancels on failure and it invokes the @Complete method when the LRA closes successfully.

Note
The @Complete and @Compensate methods don’t need to be JAX-RS methods. See the specification for details.
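
As a hedged illustration of that option (not used in this quickstart), the specification also accepts participant methods that are not JAX-RS resource methods; such callbacks take the LRA id and the parent LRA id as plain parameters:

// Illustrative non-JAX-RS callbacks; see the specification for the allowed
// signatures and return types.
@Complete
public void complete(URI lraId, URI parentLraId) {
    // confirm the local work associated with this LRA
}

@Compensate
public void compensate(URI lraId, URI parentLraId) {
    // undo the local work associated with this LRA
}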

Now we are able to start our first LRA. You can deploy the application to WildFly as demonstrated in the Solution section. Remember that you need to enable the LRA extensions and subsystems with the enable-microprofile-lra.cli script.

Then you can invoke the LRAParticipant1 JAX-RS resource as:

$ curl http://localhost:8080/microprofile-lra/participant1/work

or if you want to simulate LRA failure as:

$ curl "http://localhost:8080/microprofile-lra/participant1/work?failLRA=true"

In either case, you will see the LRA execution message printed in the WildFly console:

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant1] (default task-1) Executing action of Participant 1 enlisted in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_48 that was assigned http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_48/0_ffff0aca949a_-4998614b_64e74427_4a participant Id.

And either the complete or compensate message, depending on the failLRA parameter that can fail the LRA, causing it to cancel:

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant1] (default task-4) Complete action for Participant 1 (http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_37/0_ffff0aca949a_-4998614b_64e74427_39) in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_37.


INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant1] (default task-4) Compensating action for Participant 1 (http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_48/0_ffff0aca949a_-4998614b_64e74427_4a) in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_48.

Multiple participants in the LRA

One participant that starts and ends the LRA is probably enough to demonstrate the functionality, but in a distributed microservices architecture it rarely makes sense to have only one service participate in a distributed transaction. So let’s add another participant to the LRA started in LRAParticipant1.

Copy LRAParticipant1 into a new class LRAParticipant2 and change all references to participant1 to participant2. For convenience, the full class is also provided here:

/*
 * JBoss, Home of Professional Open Source.
 * Copyright 2023, Red Hat, Inc., and individual contributors
 * as indicated by the @author tags. See the copyright.txt file in the
 * distribution for a full listing of individual contributors.
 *
 * This is free software; you can redistribute it and/or modify it
 * under the terms of the GNU Lesser General Public License as
 * published by the Free Software Foundation; either version 2.1 of
 * the License, or (at your option) any later version.
 *
 * This software is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this software; if not, write to the Free
 * Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
 * 02110-1301 USA, or see the FSF site: http://www.fsf.org.
 */

package org.wildfly.quickstarts.microprofile.lra;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.HeaderParam;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;
import org.eclipse.microprofile.lra.annotation.Compensate;
import org.eclipse.microprofile.lra.annotation.Complete;
import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
import org.jboss.logging.Logger;

import java.net.URI;

@Path("/participant2")
@ApplicationScoped
public class LRAParticipant2 {

    private static final Logger LOGGER = Logger.getLogger(LRAParticipant2.class);

    private String workLRAId;
    private String workRecoveryId;
    private String completeLRAId;
    private String completeRecoveryId;
    private String compensateLRAId;
    private String compensateRecoveryId;

    @LRA(end = false)
    @GET
    @Path("/work")
    public Response work(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                         @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
        LOGGER.infof("Executing action of Participant 2 enlisted in LRA %s " +
            "that was assigned %s participant Id.", lraId, participantId);

        workLRAId = lraId.toASCIIString();
        workRecoveryId = participantId.toASCIIString();
        compensateLRAId = null;
        compensateRecoveryId = null;
        completeLRAId = null;
        completeRecoveryId = null;

        return Response.ok().build();
    }

    @Compensate
    @PUT
    @Path("/compensate")
    public Response compensateWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                                   @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
        LOGGER.infof("Compensating action for Participant 2 (%s) in LRA %s.", participantId, lraId);

        compensateLRAId = lraId.toASCIIString();
        compensateRecoveryId = participantId.toASCIIString();

        return Response.ok().build();
    }

    @Complete
    @PUT
    @Path("/complete")
    public Response completeWork(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                                 @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId) {
        LOGGER.infof("Complete action for Participant 2 (%s) in LRA %s.", participantId, lraId);

        completeLRAId = lraId.toASCIIString();
        completeRecoveryId = participantId.toASCIIString();

        return Response.ok().build();
    }

    @GET
    @Path("/result")
    @Produces(MediaType.APPLICATION_JSON)
    public ParticipantResult getParticipantResult() {
        return new ParticipantResult(workLRAId, workRecoveryId,
            completeLRAId, completeRecoveryId,
            compensateLRAId, compensateRecoveryId);
    }

}

The only notable change is the @LRA annotation, which now reads @LRA(end = false). This attribute states that the LRA should not be ended when this business method ends. If we ended the LRA here, the compensate or complete callbacks would still be invoked on all enlisted participants (including LRAParticipant1, which will soon propagate the LRA into this class). However, the close/cancel attempted at the end of the LRAParticipant1#work method would then find the LRA already ended, and the coordinator would report this.

We also need to add the call to the newly created JAX-RS resource to the LRAParticipant1#work method, as shown in this snippet:

@LRA
@GET
@Path("/work")
public Response work(@HeaderParam(LRA.LRA_HTTP_CONTEXT_HEADER) URI lraId,
                     @HeaderParam(LRA.LRA_HTTP_RECOVERY_HEADER) URI participantId,
                     @QueryParam("failLRA") boolean failLRA) {
    LOGGER.infof("Executing action of Participant 1 enlisted in LRA %s " +
        "that was assigned %s participant Id.", lraId, participantId);

    workLRAId = lraId.toASCIIString();
    workRecoveryId = participantId.toASCIIString();
    compensateLRAId = null;
    compensateRecoveryId = null;
    completeLRAId = null;
    completeRecoveryId = null;

    // call Participant 2 to propagate the LRA
    try (Client client = ClientBuilder.newClient()) {
        client.target(uriInfo.getBaseUri() + "/participant2/work")
            .request().get();
    }

    return failLRA ? Response.status(Response.Status.INTERNAL_SERVER_ERROR).entity(lraId.toASCIIString()).build() :
        Response.ok(lraId.toASCIIString()).build();
}

You might remember that we need to propagate the LRA ID (LRA context) in the LRA.LRA_HTTP_CONTEXT_HEADER. However, if we make the outgoing JAX-RS call in a JAX-RS method that already carries an active LRA context, the context is automatically added to the outgoing call, so we don’t need to pass it manually to each outgoing call.
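
Should you ever need to propagate the context from a place where this automatic propagation does not apply, a minimal sketch of passing it yourself would simply set the same header on the outgoing request (illustrative only, not needed in this quickstart):

// Hypothetical manual propagation of the LRA context.
try (Client client = ClientBuilder.newClient()) {
    client.target(uriInfo.getBaseUri() + "/participant2/work")
        .request()
        .header(LRA.LRA_HTTP_CONTEXT_HEADER, lraId) // pass the LRA id explicitly
        .get();
}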

Now we are ready to propagate the LRA started in Participant 1 to Participant 2, enlist both in the newly started LRA, and finish the LRA when Participant 1’s work method ends.

Redeploy the application to WildFly as shown in the Solution section. Then you can repeat the calls to the LRAParticipant1 JAX-RS resource as we used them previously:

$ curl http://localhost:8080/microprofile-lra/participant1/work

or if you want to simulate LRA failure as:

$ curl "http://localhost:8080/microprofile-lra/participant1/work?failLRA=true"

But this time, you will see that the LRA is propagated to LRAParticipant2 and its complete or compensate callbacks are invoked by the LRA coordinator in the same way as for LRAParticipant1:

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant1] (default task-1) Executing action of Participant 1 enlisted in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_38b that was assigned http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_38b/0_ffff0aca949a_-4998614b_64e74427_38d participant Id.

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant2] (default task-2) Executing action of Participant 2 enlisted in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_38b that was assigned http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_38b/0_ffff0aca949a_-4998614b_64e74427_38f participant Id.

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant1] (default task-5) Compensating action for Participant 1 (http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_38b/0_ffff0aca949a_-4998614b_64e74427_38d) in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_38b.

INFO  [org.wildfly.quickstarts.microprofile.lra.LRAParticipant2] (default task-5) Compensating action for Participant 2 (http://localhost:8080/lra-coordinator/lra-coordinator/recoveryhttp%3A%2F%2Flocalhost%3A8080%2Flra-coordinator%2Flra-coordinator%2F0_ffff0aca949a_-4998614b_64e74427_38b/0_ffff0aca949a_-4998614b_64e74427_38f) in LRA http://localhost:8080/lra-coordinator/lra-coordinator/0_ffff0aca949a_-4998614b_64e74427_38b.

Conclusion

MicroProfile LRA provides a simple API for distributed transactions based on the saga pattern. To use it on WildFly, we need to enable the appropriate extensions and subsystems for the LRA coordinator (a service that manages LRAs) and the LRA participant (the client API). LRAs are controlled through annotations provided by the specification.

Congratulations! You have reached the end of this tutorial. You can find more information about MicroProfile LRA in the specification GitHub repository.