The microprofile-fault-tolerance quickstart demonstrates how to use Eclipse MicroProfile Fault Tolerance in WildFly.

One of the challenges brought by the distributed nature of microservices is that communication with external systems is inherently unreliable. This increases demand on resiliency of applications. To simplify making more resilient applications, WildFly contains an implementation of the MicroProfile Fault Tolerance specification.

In this guide, we demonstrate usage of MicroProfile Fault Tolerance annotations such as @Timeout, @Fallback, @Retry and @CircuitBreaker. The specification also introduces @Bulkhead and @Asynchronous interceptor bindings not covered in this guide.

Scenario

The application built in this guide simulates a simple backend for a gourmet coffee online store. It implements a REST endpoint providing information about the coffee samples we have in store.

Let’s imagine, although it’s not implemented as such, that some methods in our endpoint require communication with external services, such as a database or an external microservice, which introduces a factor of unreliability. This is simulated in our code by intentionally throwing exceptions with a certain probability. We then use MicroProfile Fault Tolerance annotations to overcome these failures.

Solution

We recommend that you follow the instructions that create the application from scratch. However, you can also deploy the completed example which is available in this directory.

System Requirements

The application this project produces is designed to be run on WildFly Application Server 33 or later.

All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.

Use of the WILDFLY_HOME and QUICKSTART_HOME Variables

In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.

When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.

Start the WildFly Standalone Server

  1. Open a terminal and navigate to the root of the WildFly directory.

  2. Start the WildFly server with the MicroProfile profile by typing the following command.

    $ WILDFLY_HOME/bin/standalone.sh -c standalone-microprofile.xml
    Note
    For Windows, use the WILDFLY_HOME\bin\standalone.bat script.

Creating an Application from Scratch

In this section we will go through the steps to create a new JAX-RS deployment from scratch and then make it more resilient by using MicroProfile Fault Tolerance annotations.

Project Generation

First, we need to generate a Maven project. Open a terminal and create an empty Maven project with the following command:

mvn archetype:generate \
    -DgroupId=org.wildfly.quickstarts.microprofile.faulttolerance \
    -DartifactId=microprofile-fault-tolerance \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-webapp \
    -DinteractiveMode=false
cd microprofile-fault-tolerance

Now, open the project in your favorite IDE.

pom.xml Updates

Next, the project’s pom.xml should be updated so that the dependencies required by this quickstart are available and a plug-in is installed which can deploy the quickstart directly to WildFly.

Add the following properties to the pom.xml:

<version.bom.microprofile>33.0.0.Final</version.bom.microprofile>
<version.bom.ee>33.0.0.Final</version.bom.ee>

Also, the project can be updated to use Java 11 as the minimum, matching the requirement stated above:

<maven.compiler.source>11</maven.compiler.source>
<maven.compiler.target>11</maven.compiler.target>

Before the dependencies are defined, add the following BOMs:

<dependencyManagement>
    <dependencies>
        <!-- importing the ee-with-tools BOM adds specs and other useful artifacts as managed dependencies -->
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-ee-with-tools</artifactId>
            <version>${version.bom.ee}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
        <!-- importing the microprofile BOM adds MicroProfile specs -->
        <dependency>
            <groupId>org.wildfly.bom</groupId>
            <artifactId>wildfly-microprofile</artifactId>
            <version>${version.bom.microprofile}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

By using BOMs, the majority of the dependencies used within this quickstart align with the versions used by the application server.

The following dependencies can now be added to the project.

<dependencies>
    <dependency>
        <groupId>org.eclipse.microprofile.fault-tolerance</groupId>
        <artifactId>microprofile-fault-tolerance-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>jakarta.enterprise</groupId>
        <artifactId>jakarta.enterprise.cdi-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-core</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.logging</groupId>
        <artifactId>jboss-logging</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>

Note that all dependencies have the scope provided, because the corresponding APIs are already provided by the WildFly server at runtime.

As we are going to be deploying this application to the WildFly server, let’s also add a Maven plugin that will simplify working with the application server. Add the following plugin to the build section of the pom.xml:

<build>
  <plugins>
    ...
    <plugin>
      <groupId>org.wildfly.plugins</groupId>
      <artifactId>wildfly-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

Set up the required Maven repositories (if you don’t have them set up in your global Maven settings):

<repositories>
    <repository>
        <id>jboss-public-maven-repository</id>
        <name>JBoss Public Maven Repository</name>
        <url>https://repository.jboss.org/nexus/content/groups/public</url>
        <layout>default</layout>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </snapshots>
    </repository>
    <repository>
        <id>redhat-ga-maven-repository</id>
        <name>Red Hat GA Maven Repository</name>
        <url>https://maven.repository.redhat.com/ga/</url>
        <layout>default</layout>
        <releases>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </releases>
        <snapshots>
            <enabled>true</enabled>
            <updatePolicy>never</updatePolicy>
        </snapshots>
    </repository>
</repositories>
<pluginRepositories>
    <pluginRepository>
        <id>jboss-public-maven-repository</id>
        <name>JBoss Public Maven Repository</name>
        <url>https://repository.jboss.org/nexus/content/groups/public</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
    <pluginRepository>
        <id>redhat-ga-maven-repository</id>
        <name>Red Hat GA Maven Repository</name>
        <url>https://maven.repository.redhat.com/ga/</url>
        <releases>
            <enabled>true</enabled>
        </releases>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </pluginRepository>
</pluginRepositories>

Now we are ready to start developing an application with MicroProfile Fault Tolerance capabilities.

Preparing an Application: REST Endpoint and CDI Bean

In this section we create a skeleton of our application, so that we have something that we can extend and to which we can add fault tolerance features later on.

First, create a simple entity representing a coffee sample in our store:

package org.wildfly.quickstarts.microprofile.faulttolerance;

public class Coffee {

    public Integer id;
    public String name;
    public String countryOfOrigin;
    public Integer price;

    public Coffee() {
    }

    public Coffee(Integer id, String name, String countryOfOrigin, Integer price) {
        this.id = id;
        this.name = name;
        this.countryOfOrigin = countryOfOrigin;
        this.price = price;
    }
}

Now, let’s expose our JAX-RS application at the context path:

package org.wildfly.quickstarts.microprofile.faulttolerance;

import jakarta.ws.rs.ApplicationPath;
import jakarta.ws.rs.core.Application;

@ApplicationPath("/")
public class CoffeeApplication extends Application {
}

Let’s continue with a simple CDI bean that will serve as a repository of our coffee samples.

package org.wildfly.quickstarts.microprofile.faulttolerance;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class CoffeeRepositoryService {

    private Map<Integer, Coffee> coffeeList = new HashMap<>();

    public CoffeeRepositoryService() {
        coffeeList.put(1, new Coffee(1, "Fernandez Espresso", "Colombia", 23));
        coffeeList.put(2, new Coffee(2, "La Scala Whole Beans", "Bolivia", 18));
        coffeeList.put(3, new Coffee(3, "Dak Lak Filter", "Vietnam", 25));
    }

    public List<Coffee> getAllCoffees() {
        return new ArrayList<>(coffeeList.values());
    }

    public Coffee getCoffeeById(Integer id) {
        return coffeeList.get(id);
    }

    public List<Coffee> getRecommendations(Integer id) {
        if (id == null) {
            return Collections.emptyList();
        }
        return coffeeList.values().stream()
                .filter(coffee -> !id.equals(coffee.id))
                .limit(2)
                .collect(Collectors.toList());
    }
}

Finally, create the org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource class as follows:

package org.wildfly.quickstarts.microprofile.faulttolerance;

import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

import org.jboss.logging.Logger;

@Path("/coffee")
@Produces(MediaType.APPLICATION_JSON)
public class CoffeeResource {

    private static final Logger LOGGER = Logger.getLogger(CoffeeResource.class);

    @Inject
    private CoffeeRepositoryService coffeeRepository;

    private AtomicLong counter = new AtomicLong(0);

    @GET
    public List<Coffee> coffees() {
        final Long invocationNumber = counter.getAndIncrement();

        maybeFail(String.format("CoffeeResource#coffees() invocation #%d failed", invocationNumber));

        LOGGER.infof("CoffeeResource#coffees() invocation #%d returning successfully", invocationNumber);
        return coffeeRepository.getAllCoffees();
    }

    private void maybeFail(String failureLogMessage) {
        if (new Random().nextBoolean()) {
            LOGGER.error(failureLogMessage);
            throw new RuntimeException("Resource failure.");
        }
    }
}

At this point, we expose a single REST method that returns a list of coffee samples in JSON format. Note that we introduced some fault-inducing code in the CoffeeResource#maybeFail() method, which causes the CoffeeResource#coffees() endpoint method to fail in about 50% of requests.

Build and Deploy the Initial Application

Let’s check that our application works!

  1. Make sure the WildFly server is started as described above.

  2. Open a new terminal and navigate to the root directory of your project.

  3. Type the following command to build and deploy the project:

    mvn clean package wildfly:deploy

Then, open http://localhost:8080/microprofile-fault-tolerance/coffee in your browser and make a couple of requests. Some requests should return the list of our coffee samples in JSON, while the rest will fail with the RuntimeException thrown in CoffeeResource#maybeFail().

Adding Resiliency: Retries

Leave the WildFly server running and, in your IDE, add the @Retry annotation to the CoffeeResource#coffees() method as follows and save the file:

import org.eclipse.microprofile.faulttolerance.Retry;
...

public class CoffeeResource {
    ...
    @GET
    @Retry(maxRetries = 4)
    public List<Coffee> coffees() {
        ...
    }
    ...
}

Rebuild and redeploy the application to the WildFly server:

mvn wildfly:deploy

You can reload the page a couple more times. Practically all requests should now succeed. The CoffeeResource#coffees() method is in fact still failing in about 50% of cases, but every time it does, the platform automatically retries the call!

To see that the failures still happen, check the server output. The log messages should be similar to these:

18:29:20,901 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #0 failed
18:29:20,901 INFO  [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #1 returning successfully
18:29:21,315 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #0 failed
18:29:21,337 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #1 failed
18:29:21,502 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #2 failed
18:29:21,654 INFO  [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #3 returning successfully

You can see that every time an invocation fails, it is immediately followed by another invocation, until one succeeds. Since we allowed 4 retries, 5 invocations would have to fail in a row for the user to actually be exposed to a failure. That is fairly unlikely to happen.
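
The @Retry annotation can be tuned further. Besides maxRetries, the MicroProfile Fault Tolerance specification also defines parameters such as delay, delayUnit, jitter, retryOn and abortOn. The following sketch uses arbitrary example values and is not part of the quickstart code; it retries only on RuntimeException and spaces the attempts out:

import java.time.temporal.ChronoUnit;
import org.eclipse.microprofile.faulttolerance.Retry;
...

public class CoffeeResource {
    ...
    @GET
    // Retry up to 4 times, waiting roughly 100 ms (plus up to 50 ms of jitter) between
    // attempts, and only when a RuntimeException is thrown.
    @Retry(maxRetries = 4, delay = 100, delayUnit = ChronoUnit.MILLIS, jitter = 50, retryOn = RuntimeException.class)
    public List<Coffee> coffees() {
        ...
    }
    ...
}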

Adding Resiliency: Timeouts

So what else have we got in MicroProfile Fault Tolerance? Let’s look into timeouts.

Add the following two methods to our CoffeeResource endpoint:

import org.jboss.resteasy.annotations.jaxrs.PathParam;
import org.eclipse.microprofile.faulttolerance.Timeout;
...
public class CoffeeResource {
    ...
    @GET
    @Path("/{id}/recommendations")
    @Timeout(250)
    public List<Coffee> recommendations(@PathParam("id") int id) {
        long started = System.currentTimeMillis();
        final long invocationNumber = counter.getAndIncrement();

        try {
            randomDelay();
            LOGGER.infof("CoffeeResource#recommendations() invocation #%d returning successfully", invocationNumber);
            return coffeeRepository.getRecommendations(id);
        } catch (InterruptedException e) {
            LOGGER.errorf("CoffeeResource#recommendations() invocation #%d timed out after %d ms",
                    invocationNumber, System.currentTimeMillis() - started);
            return null;
        }
    }

    private void randomDelay() throws InterruptedException {
        Thread.sleep(new Random().nextInt(500));
    }
}

Rebuild and redeploy the application:

mvn wildfly:deploy

We added some new functionality. We want to be able to recommend related coffees based on the coffee a user is currently looking at. It’s not a critical functionality; it’s a nice-to-have. When the system is overloaded and the logic behind obtaining recommendations takes too long to execute, we would rather time out and render the UI without recommendations.

Note that the timeout was configured to 250 ms, and a random artificial delay between 0 and 500 ms was introduced into the CoffeeResource#recommendations() method.
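
The numeric value of @Timeout is interpreted in milliseconds by default; the specification also allows stating the unit explicitly. A minimal sketch that is equivalent to the annotation used above:

import java.time.temporal.ChronoUnit;
import org.eclipse.microprofile.faulttolerance.Timeout;
...
    // Equivalent to @Timeout(250): the default unit is milliseconds.
    @Timeout(value = 250, unit = ChronoUnit.MILLIS)
    public List<Coffee> recommendations(@PathParam("id") int id) {
        ...
    }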

In your browser, go to http://localhost:8080/microprofile-fault-tolerance/coffee/2/recommendations and hit reload a couple of times.

You should see some requests time out with org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException. Requests that do not time out should show two recommended coffee samples in JSON.

Adding Resiliency: Fallbacks

Let’s improve the recommendations feature by providing a fallback functionality for the case when a timeout happens.

Add a fallback method to CoffeeResource and a @Fallback annotation to the CoffeeResource#recommendations() method as follows:

import java.util.Collections;
import org.eclipse.microprofile.faulttolerance.Fallback;
...
public class CoffeeResource {
    ...
    @Fallback(fallbackMethod = "fallbackRecommendations")
    public List<Coffee> recommendations(@PathParam("id") int id) {
        ...
    }

    public List<Coffee> fallbackRecommendations(int id) {
        LOGGER.info("Falling back to RecommendationResource#fallbackRecommendations()");
        // safe bet, return something that everybody likes
        return Collections.singletonList(coffeeRepository.getCoffeeById(1));
    }
    ...
}

Rebuild and redeploy the application.

Hit reload several times on http://localhost:8080/microprofile-fault-tolerance/coffee/2/recommendations. The TimeoutException should not appear anymore. Instead, in case of a timeout, the page will display a single recommendation that we hardcoded in our fallback method fallbackRecommendations(), rather than two recommendations returned by the original method.

Check the server output to see that fallback is really happening:

18:36:01,873 INFO  [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#recommendations() invocation #0 returning successfully
18:36:02,705 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#recommendations() invocation #0 timed out after 253 ms
18:36:02,706 INFO  [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) Falling back to RecommendationResource#fallbackRecommendations()
Note
The fallback method is required to have the same parameters as the original method.
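
If a fallback method with a matching signature is not convenient, the specification also allows passing a handler class to the annotation, for example @Fallback(RecommendationsFallbackHandler.class). The handler below is a minimal, hypothetical sketch (the class name is ours and it is not part of the quickstart); it assumes CDI injection into the handler and returns the same hardcoded recommendation as the fallback method above:

import java.util.Collections;
import java.util.List;
import jakarta.inject.Inject;
import org.eclipse.microprofile.faulttolerance.ExecutionContext;
import org.eclipse.microprofile.faulttolerance.FallbackHandler;

public class RecommendationsFallbackHandler implements FallbackHandler<List<Coffee>> {

    @Inject
    private CoffeeRepositoryService coffeeRepository;

    @Override
    public List<Coffee> handle(ExecutionContext context) {
        // context.getFailure() and context.getParameters() describe the failed invocation
        return Collections.singletonList(coffeeRepository.getCoffeeById(1));
    }
}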

Adding Resiliency: Circuit Breakers

A circuit breaker is useful for limiting the number of failures happening in the system when part of the system becomes temporarily unstable. The circuit breaker records successful and failed invocations of a method, and when the ratio of failed invocations reaches the specified threshold, the circuit breaker opens and blocks all further invocations of that method for a given time.

Add the following code into the CoffeeRepositoryService bean, so that we can demonstrate a circuit breaker in action:

import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
...

public class CoffeeRepositoryService {
    ...

    private AtomicLong counter = new AtomicLong(0);

    @CircuitBreaker(requestVolumeThreshold = 4)
    public Integer getAvailability(Coffee coffee) {
        maybeFail();
        return new Random().nextInt(30);
    }

    private void maybeFail() {
        // introduce some artificial failures
        final Long invocationNumber = counter.getAndIncrement();
        if (invocationNumber % 4 > 1) { // alternate 2 successful and 2 failing invocations
            throw new RuntimeException("Service failed.");
        }
    }
}

and add the following method to the CoffeeResource endpoint:

import jakarta.ws.rs.core.Response;
...
public class CoffeeResource {
    ...
    @Path("/{id}/availability")
    @GET
    public Response availability(@PathParam("id") int id) {
        final Long invocationNumber = counter.getAndIncrement();

        Coffee coffee = coffeeRepository.getCoffeeById(id);
        // check that coffee with given id exists, return 404 if not
        if (coffee == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }

        try {
            Integer availability = coffeeRepository.getAvailability(coffee);
            LOGGER.infof("CoffeeResource#availability() invocation #%d returning successfully", invocationNumber);
            return Response.ok(availability).build();
        } catch (RuntimeException e) {
            String message = e.getClass().getSimpleName() + ": " + e.getMessage();
            LOGGER.errorf("CoffeeResource#availability() invocation #%d failed: %s", invocationNumber, message);
            return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
                    .entity(message)
                    .type(MediaType.TEXT_PLAIN_TYPE)
                    .build();
        }
    }
    ...
}

Rebuild and redeploy the application.

We added another piece of functionality: the application can return the number of remaining packages of a given coffee in our store (just a random number).

This time an artificial failure was introduced in the CDI bean: the CoffeeRepositoryService#getAvailability() method is going to alternate between two successful and two failed invocations.

We also added a @CircuitBreaker annotation with requestVolumeThreshold = 4. CircuitBreaker.failureRatio defaults to 0.5 and CircuitBreaker.delay defaults to 5 seconds. That means the circuit breaker opens when 2 of the last 4 invocations have failed, and it then stays open for 5 seconds.
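
If you prefer to make those values explicit rather than relying on the defaults, the annotation can be written out in full. The following sketch simply spells out the defaults described above (successThreshold is the number of trial invocations that must succeed before the circuit closes again):

import java.time.temporal.ChronoUnit;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
...
    // Open the circuit when 2 of the last 4 invocations (failureRatio 0.5) have failed,
    // keep it open for 5 seconds, then close it again after 1 successful trial invocation.
    @CircuitBreaker(requestVolumeThreshold = 4, failureRatio = 0.5, delay = 5, delayUnit = ChronoUnit.SECONDS, successThreshold = 1)
    public Integer getAvailability(Coffee coffee) {
        ...
    }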

To test this out, do the following:

  1. Go to http://localhost:8080/microprofile-fault-tolerance/coffee/2/availability in your browser. You should see a number being returned.

  2. Hit reload, this second request should again be successful and return a number.

  3. Reload two more times. Both times you should see text "RuntimeException: Service failed.", which is the exception thrown by CoffeeRepositoryService#getAvailability().

  4. Reload a couple more times. Unless you waited too long, you should again see an exception, but this time it is "CircuitBreakerOpenException: getAvailability". This exception indicates that the circuit breaker has opened and the CoffeeRepositoryService#getAvailability() method is no longer being called.

  5. Give it 5 seconds, during which the circuit breaker should close. You should then be able to make two successful requests again.

Working with the Completed Quickstart

This section shows how to work with the complete quickstart.

Build and Deploy the Quickstart

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type the following command to build the quickstart.

    $ mvn clean package
  4. Type the following command to deploy the quickstart.

    $ mvn wildfly:deploy

This deploys the microprofile-fault-tolerance/target/microprofile-fault-tolerance.war to the running instance of the server.

You should see a message in the server log indicating that the archive deployed successfully.

Test the Deployed Application

You can visit the following URLs in your browser:

  • http://localhost:8080/microprofile-fault-tolerance/coffee - the list of coffee samples (demonstrates @Retry)

  • http://localhost:8080/microprofile-fault-tolerance/coffee/2/recommendations - recommendations for a coffee (demonstrates @Timeout and @Fallback)

  • http://localhost:8080/microprofile-fault-tolerance/coffee/2/availability - availability of a coffee (demonstrates @CircuitBreaker)

Run the Integration Tests

This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
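
As an illustration of what such a test can look like, here is a minimal, hypothetical sketch (this is not the actual test class shipped under src/test/) using JUnit 5 and the JDK HTTP client. It assumes the deployment's base URL, including any context path, is passed via the server.host system property used in the commands below, and falls back to the local deployment URL otherwise:

package org.wildfly.quickstarts.microprofile.faulttolerance;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class CoffeeResourceIT {

    @Test
    public void coffeesEndpointResponds() throws Exception {
        // Base URL of the deployed quickstart; defaults to the URL used earlier in this guide.
        String serverHost = System.getProperty("server.host",
                "http://localhost:8080/microprofile-fault-tolerance");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serverHost + "/coffee"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Thanks to @Retry on CoffeeResource#coffees(), this should virtually always succeed
        // despite the simulated failures.
        Assertions.assertEquals(200, response.statusCode());
    }
}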

Follow these steps to run the integration tests.

  1. Make sure WildFly server is started.

  2. Make sure the quickstart is deployed.

  3. Type the following command to run the verify goal with the integration-testing profile activated.

    $ mvn verify -Pintegration-testing 

Undeploy the Quickstart

When you are finished testing the quickstart, follow these steps to undeploy the archive.

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type this command to undeploy the archive:

    $ mvn wildfly:undeploy

Building and running the quickstart application in a bootable JAR

You can use the WildFly JAR Maven plug-in to build a WildFly bootable JAR to run this quickstart.

The quickstart pom.xml file contains a Maven profile named bootable-jar which configures the bootable JAR build:

      <profile>
          <id>bootable-jar</id>
          <build>
              <plugins>
                  <plugin>
                      <groupId>org.wildfly.plugins</groupId>
                      <artifactId>wildfly-maven-plugin</artifactId>
                      <configuration>
                          <discover-provisioning-info>
                              <version>${version.server}</version>
                          </discover-provisioning-info>
                          <bootable-jar>true</bootable-jar>
                          <!--
                            Rename the output war to ROOT.war before adding it to the server, so that the
                            application is deployed in the root web context.
                          -->
                          <name>ROOT.war</name>
                          <add-ons>...</add-ons>
                      </configuration>
                      <executions>
                          <execution>
                              <goals>
                                  <goal>package</goal>
                              </goals>
                          </execution>
                      </executions>
                  </plugin>
                  ...
              </plugins>
          </build>
      </profile>

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons
Procedure
  1. Build the quickstart bootable JAR with the following command:

    $ mvn clean package -Pbootable-jar
  2. Run the quickstart application contained in the bootable JAR:

    $ java -jar target/microprofile-fault-tolerance-bootable.jar
  3. You can now interact with the quickstart application.

Note

After the quickstart application is deployed, the bootable JAR includes the application in the root context. Therefore, any URLs related to the application should not have the /microprofile-fault-tolerance path segment after HOST:PORT.

Run the Integration Tests with a bootable jar

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a bootable jar.

Follow these steps to run the integration tests.

  1. Make sure the bootable jar is provisioned.

    $ mvn clean package -Pbootable-jar
  2. Start the WildFly bootable JAR, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation.

    $ mvn wildfly:start-jar
  3. Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart’s URL using the server.host system property, which for a bootable JAR defaults to http://localhost:8080.

    $ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
  4. Shut down the WildFly bootable JAR, this time also using the WildFly Maven Plugin.

    $ mvn wildfly:shutdown

Building and running the quickstart application with OpenShift

Build the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, and to deploy and run the quickstart in an OpenShift environment.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with WildFly for OpenShift and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.

Prerequisites

  • You must be logged in to OpenShift and have an oc client available to connect to OpenShift

  • Helm must be installed to deploy the backend on OpenShift.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Deploy the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

Log in to your OpenShift instance using the oc login command. The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following command:

$ helm install microprofile-fault-tolerance -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s 
NAME: microprofile-fault-tolerance
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

oc get deployment microprofile-fault-tolerance

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: microprofile-fault-tolerance
deploy:
  replicas: 1

This will create a new deployment on OpenShift and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

Get the URL of the route to the deployment.

$ oc get route microprofile-fault-tolerance -o jsonpath="{.spec.host}"

Access the application in your web browser using the displayed URL.

Note

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /microprofile-fault-tolerance path segment after HOST:PORT.

Run the Integration Tests with OpenShift

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route microprofile-fault-tolerance --template='{{ .spec.host }}') 
Note

The tests are using SSL to connect to the quickstart running on OpenShift. So you need the certificates to be trusted by the machine the tests are run from.

Undeploy the WildFly Source-to-Image (S2I) Quickstart from OpenShift with Helm Charts

$ helm uninstall microprofile-fault-tolerance

Building and running the quickstart application with Kubernetes

Build the WildFly Quickstart to Kubernetes with Helm Charts

For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server, suitable for running on Kubernetes.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the Kubernetes environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with Kubernetes and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.

Install Kubernetes

In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.

minikube start --memory='4gb'

The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.

Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.

minikube addons enable registry

In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How to do this depends on your operating system. All of the examples below will expose it at localhost:5000.

# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"

# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &

# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"

Prerequisites

  • Helm must be installed to deploy the backend on Kubernetes.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Deploy the WildFly Source-to-Image (S2I) Quickstart to Kubernetes with Helm Charts

The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following commands:

mvn -Popenshift package wildfly:image

This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be microprofile-fault-tolerance.

Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case we tag as localhost:5000/microprofile-fault-tolerance:latest and push it to the internal registry in our Kubernetes instance:

# Tag the image
docker tag microprofile-fault-tolerance localhost:5000/microprofile-fault-tolerance:latest
# Push the image to the registry
docker push localhost:5000/microprofile-fault-tolerance:latest

In the call to helm install below, which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:

  • --set build.enabled=false - This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.

  • --set deploy.route.enabled=false - This disables route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift specific concept and thus not available on Kubernetes.

  • --set image.name="localhost:5000/microprofile-fault-tolerance" - This tells the Helm chart to use the image we built, tagged and pushed to Kubernetes' internal registry above.

$ helm install microprofile-fault-tolerance -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/microprofile-fault-tolerance"
NAME: microprofile-fault-tolerance
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

kubectl get deployment microprofile-fault-tolerance

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: microprofile-fault-tolerance
deploy:
  replicas: 1

This will create a new deployment on Kubernetes and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the microprofile-fault-tolerance service created for us by the Helm chart.

This service will run on port 8080, and we set up the port forward to also run on port 8080:

kubectl port-forward service/microprofile-fault-tolerance 8080:8080

The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.

Note

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /microprofile-fault-tolerance path segment after HOST:PORT.

Run the Integration Tests with Kubernetes

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 

Undeploy the WildFly Source-to-Image (S2I) Quickstart from Kubernetes with Helm Charts

$ helm uninstall microprofile-fault-tolerance

To stop the port forward you created earlier, press Ctrl+C in the terminal where the kubectl port-forward command is running.

Conclusion

MicroProfile Fault Tolerance allows you to improve the resiliency of your application without adding complexity to your business logic.