The todo-backend quickstart demonstrates how to implement a backend that exposes an HTTP API with JAX-RS to manage a list of ToDo items that are persisted in a database with JPA.

This quickstart shows how to set up a local deployment of this backend, as well as a deployment on OpenShift that connects to a PostgreSQL database also hosted on OpenShift.

What is it?

The todo-backend quickstart demonstrates how to implement a backend that exposes an HTTP API with JAX-RS to manage a list of ToDo items that are persisted in a database with JPA.

  • The backend exposes a HTTP API to manage a list of todos that complies with the specs defined at todobackend.com.

  • It requires a connection to a PostgreSQL database to persist the todos.

  • It uses Server Provisioning for both local and cloud deployment.

  • It can be built with WildFly S2I images for cloud deployment.

  • It is deployed on OpenShift using the Helm Chart for WildFly.
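
To make this concrete, here is a minimal sketch of what such a JAX-RS resource could look like. The package, class, and method names are illustrative assumptions, not the actual quickstart sources, and the JAX-RS Application class and JSON serialization details are omitted (see also the entity sketch in the persistence section below):

package org.example.todos; // hypothetical package, for illustration only

import java.util.List;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import jakarta.transaction.Transactional;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Exposes the todo list over HTTP with JAX-RS and persists it with JPA.
@Path("/")
@ApplicationScoped
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class TodoResource {

    @PersistenceContext
    EntityManager em;

    // GET / returns the list of todos, as in the curl examples later in this guide.
    @GET
    public List<Todo> getAll() {
        return em.createQuery("SELECT t FROM Todo t", Todo.class).getResultList();
    }

    // POST / creates a new todo from the JSON payload.
    @POST
    @Transactional
    public Todo create(Todo todo) {
        em.persist(todo);
        return todo;
    }
}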

System Requirements

The application this project produces is designed to be run on WildFly Application Server 33 or later.

All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.

Architecture

Architecture with S2I

This backend is built using WildFly S2I Builder and Runtime images.

When the image is built, the org.wildfly.plugins:wildfly-maven-plugin plugin provisions the WildFly application server with the feature packs and layers the application needs. The layers are defined in the pom.xml file, in the <configuration> section of the org.wildfly.plugins:wildfly-maven-plugin plugin:

<layers>
  <layer>cloud-server</layer>
  <layer>postgresql-datasource</layer>
</layers>

The cloud-server layer provides everything needed to run the backend on OpenShift. This includes access to Jakarta EE APIs such as CDI, JAX-RS, JPA, etc. These two layers come from feature packs provided in the WildFly S2I builder image.

The postgresql-datasource layer provides a JDBC driver and a DataSource to connect to a PostgreSQL database. This layer is provided by the org.wildfly:wildfly-datasources-galleon-pack feature pack, which is included in the WildFly S2I image.

The Git repository for this feature pack is hosted at https://github.com/wildfly-extras/wildfly-datasources-galleon-pack. It provides JDBC drivers and datasources for different databases, but for this quickstart we only need the postgresql-datasource layer.

Connection to the PostgreSQL database

As mentioned, the JDBC driver and datasource configuration that the backend uses to connect to the PostgreSQL database are provided by the org.wildfly:wildfly-datasources-galleon-pack feature pack.

By default, this feature pack exposes a single datasource. In the backend, this datasource is named ToDos and is referenced in persistence.xml to configure JPA:

<persistence-unit name="primary">
  <jta-data-source>java:jboss/datasources/ToDos</jta-data-source>
</persistence-unit>
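
For illustration, the entity persisted through this datasource might look like the following sketch. The field names are inferred from the JSON responses shown later in this guide; the actual class, package, and column names in the quickstart may differ:

package org.example.todos; // hypothetical package, for illustration only

import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

// A todo item persisted through the "primary" persistence unit, which points
// to the java:jboss/datasources/ToDos datasource shown above.
@Entity
public class Todo {

    @Id
    @GeneratedValue
    private Long id;

    private String title;

    private boolean completed;

    // "order" is a reserved word in SQL, so the column gets a different name here.
    @Column(name = "ordering")
    private int order;

    // Getters and setters omitted. The "url" field in the JSON responses is
    // typically computed from the request URI rather than persisted.
}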

At runtime, we only need a few environment variables to establish the connection from WildFly to the external PostgreSQL database:

  • POSTGRESQL_DATABASE - the name of the database (that will be called todos)

  • POSTGRESQL_SERVICE_HOST - the host to connect to the database

  • POSTGRESQL_SERVICE_PORT - the port to connect to the database

  • POSTGRESQL_USER & POSTGRESQL_PASSWORD - the credentials to connect to the database

  • POSTGRESQL_DATASOURCE - the name of the datasource (as mentioned above, it will be ToDos)

Filters for Cross-Origin Resource Sharing (CORS)

The Web frontend for this quickstart uses JavaScript calls to query the backend’s HTTP API. We must enable Cross-Origin Resource Sharing (CORS) filters in the undertow subsystem of WildFly to allow these HTTP requests to succeed.

This configuration is applied by a script executed at build time, which adds the following HTTP headers to enable CORS:

  • Access-Control-Allow-Origin: *

  • Access-Control-Allow-Methods: GET, POST, OPTIONS, PUT, DELETE, PATCH

  • Access-Control-Allow-Headers: accept, authorization, content-type, x-requested-with

  • Access-Control-Allow-Credentials: true

  • Access-Control-Max-Age: 1

By default, the backend accepts requests from any origin (*). This is for simplicity only. The allowed origin can be restricted at runtime using the CORS_ORIGIN environment variable.
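
In this quickstart those headers are added by the Undertow subsystem configuration, not by application code. Purely as an illustration of what the headers do, an equivalent result could be obtained with a JAX-RS ContainerResponseFilter like the sketch below; this class is not part of the quickstart:

package org.example.todos; // hypothetical package, not part of the quickstart

import java.io.IOException;

import jakarta.ws.rs.container.ContainerRequestContext;
import jakarta.ws.rs.container.ContainerResponseContext;
import jakarta.ws.rs.container.ContainerResponseFilter;
import jakarta.ws.rs.ext.Provider;

// Adds the same CORS response headers that the quickstart configures in Undertow.
@Provider
public class CorsFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request, ContainerResponseContext response) throws IOException {
        response.getHeaders().add("Access-Control-Allow-Origin", "*");
        response.getHeaders().add("Access-Control-Allow-Methods", "GET, POST, OPTIONS, PUT, DELETE, PATCH");
        response.getHeaders().add("Access-Control-Allow-Headers", "accept, authorization, content-type, x-requested-with");
        response.getHeaders().add("Access-Control-Allow-Credentials", "true");
        response.getHeaders().add("Access-Control-Max-Age", "1");
    }
}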

Run the Backend Locally

Package the Backend

The backend is packaged and deployed on a provisioned server:

$ mvn clean package -Pprovisioned-server

Run a Local PostgreSQL Database

Before running the backend locally, we need a local PostgreSQL database that we can connect to. We use the postgres Docker image to create one:

$ docker run --name todo-backend-db -e POSTGRES_USER=todos -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 postgres

This will create a database named todos that we can connect to on localhost:5432 with the credentials todos / mysecretpassword.

Run the Application

With the PostgreSQL database running, we can start the backend by passing the required environment variables to connect to the database:

$ ./target/server/bin/standalone.sh -Denv.POSTGRESQL_DATABASE=todos -Denv.POSTGRESQL_DATASOURCE=ToDos -Denv.POSTGRESQL_SERVICE_HOST=localhost -Denv.POSTGRESQL_SERVICE_PORT=5432 -Denv.POSTGRESQL_USER=todos -Denv.POSTGRESQL_PASSWORD=mysecretpassword

The backend is running, and we can use the HTTP API to manage a list of todos:

# get a list of todos
$ curl http://localhost:8080
[]

# create a todo with the title "This is my first todo item!"
$ curl -X POST -H "Content-Type: application/json"  -d '{"title": "This is my first todo item!"}' http://localhost:8080
{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/1"}%

# get a list of todos with the one that was just created
$ curl http://localhost:8080
[{"completed":false,"id":1,"order":0,"title":"This is my first todo item!","url":"https://localhost:8080/1"}]

Run the Integration Tests with a provisioned server

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a provisioned server.

Follow these steps to run the integration tests.

  1. Make sure the server is provisioned.

    $ mvn clean package -Pprovisioned-server
  2. Start the WildFly provisioned server, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation. The path to the provisioned server should be specified using the jbossHome system property.

    $ mvn wildfly:start -DjbossHome=target/server -DPOSTGRESQL_DATABASE=todos -DPOSTGRESQL_SERVICE_HOST=localhost -DPOSTGRESQL_SERVICE_PORT=5432 -DPOSTGRESQL_USER=todos -DPOSTGRESQL_PASSWORD=mysecretpassword -DPOSTGRESQL_DATASOURCE=ToDos
  3. Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart’s URL using the server.host system property, which for a provisioned server is http://localhost:8080 by default. A sketch of what such an integration test might look like follows these steps.

    $ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 
  4. Shut down the WildFly provisioned server, again using the WildFly Maven Plugin.

    $ mvn wildfly:shutdown
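
As mentioned in step 3, the tests locate the running quickstart through the server.host system property. The sketch below shows what such an integration test might look like; it assumes JUnit 5 and the JDK HTTP client, while the actual test classes shipped with the quickstart may use different libraries and assertions:

package org.example.todos; // hypothetical package, for illustration only

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Reads the server.host system property passed on the command line
// (for example -Dserver.host=http://localhost:8080) and checks the todos endpoint.
public class TodoBackendIT {

    @Test
    public void todosEndpointResponds() throws Exception {
        String serverHost = System.getProperty("server.host", "http://localhost:8080");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(serverHost)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        Assertions.assertEquals(200, response.statusCode());
    }
}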

Run the Backend on OpenShift

Building and running the quickstart application with OpenShift

Build the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, then deploy and run the quickstart in the OpenShift environment.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with WildFly for OpenShift and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.

Prerequisites

  • You must be logged in to OpenShift and have the oc client installed to connect to OpenShift.

  • Helm must be installed to deploy the backend on OpenShift.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Add the bitnami repository, which provides a Helm chart for PostgreSQL:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I and install it with the database:

dependencies:
    - name: postgresql
      repository: https://charts.bitnami.com/bitnami
      version: ...
    - name: wildfly
      repository: http://docs.wildfly.org/wildfly-charts/
      version: ...

So we need to update the dependencies of our Helm Chart.

$ helm dependency update charts/

Deploy the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

Log in to your OpenShift instance using the oc login command. The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following command:

$ helm install todo-backend charts --wait --timeout=10m0s 
NAME: todo-backend
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

oc get deployment todo-backend

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

apiVersion: v2
name: todo-backend-chart
description: A Helm chart to deploy a WildFly todo-backend application and its Postgresql database
type: application
version: 1.0.0
appVersion: 31.0.0.Final
dependencies:
    - name: postgresql
      repository: https://charts.bitnami.com/bitnami
      version: 13.1.5
    - name: wildfly
      repository: http://docs.wildfly.org/wildfly-charts/
      version: 2.3.2

This will create a new deployment on OpenShift and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

Get the URL of the route to the deployment.

$ oc get route todo-backend -o jsonpath="{.spec.host}"

Access the application in your web browser using the displayed URL.

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /todo-backend path segment after HOST:PORT.

Run the Integration Tests with OpenShift

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.

The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route todo-backend --template='{{ .spec.host }}') 

The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine the tests are run from.

Undeploy the WildFly Source-to-Image (S2I) Quickstart from OpenShift with Helm Charts

$ helm uninstall todo-backend

Environment variables for PostgreSQL

The Helm Chart also contains the environment variables required to connect to the PostgreSQL database.

In local deployment the credentials were passed directly as the values of the environment variables.

For OpenShift, we rely on secrets so that the credentials are never copied outside OpenShift:

deploy:
  env:
    - name: POSTGRESQL_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-password
          name: todo-backend-db

When the application is deployed, the value for the POSTGRESQL_PASSWORD will be taken from the key database-password in the secret todo-backend-db.

Use the todobackend Web Frontend

Once the backend is deployed on OpenShift, it can be accessed from the route todo-backend. Let’s find the host that we can use to connect to this backend:

$ oc get route todo-backend -o jsonpath="{.spec.host}"
todo-backend-jmesnil1-dev.apps.sandbox.x8i5.p1.openshiftapps.com

This value will be different for every installation of the backend.

Make sure to prepend the host with https:// to be able to connect to the backend from the ToDo Backend Specs or Client. The host must also be publicly accessible.

We can verify that this application is properly working as a ToDo Backend by running its specs on it.

Once all tests pass, we can use the todobackend client to get a Web application connected to the backend.

todobackend.com is an external service used to showcase this quickstart. It might not always be functional, but this does not impact the availability of this backend.

Clean Up

Remove the Backend

The backend can be deleted from OpenShift by running the command:

$ helm uninstall todo-backend
release "todo-backend" uninstalled

Run the Backend on Kubernetes

Building and running the quickstart application with Kubernetes

Build the WildFly Quickstart to Kubernetes with Helm Charts

For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server, suitable for running on Kubernetes.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <!--
                                The parent POM's 'openshift' profile renames the output archive to ROOT.war so that the
                                application is deployed in the root web context. Add ROOT.war to the server.
                            -->
                            <filename>ROOT.war</filename>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the Kubernetes environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with Kubernetes and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.

Install Kubernetes

In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.

minikube start --memory='4gb'

The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.

Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.

minikube addons enable registry

In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All of the examples below will expose it at localhost:5000.

# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"

# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &

# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"

Prerequisites

  • Helm must be installed to deploy the backend on Kubernetes.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Add the bitnami repository, which provides a Helm chart for PostgreSQL:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I and install it with the database:

dependencies:
    - name: postgresql
      repository: https://charts.bitnami.com/bitnami
      version: ...
    - name: wildfly
      repository: http://docs.wildfly.org/wildfly-charts/
      version: ...

So we need to update the dependencies of our Helm Chart.

$ helm dependency update charts/

Deploy the WildFly Source-to-Image (S2I) Quickstart to Kubernetes with Helm Charts

The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following commands:

mvn -Popenshift package wildfly:image

This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be todo-backend.

Next we need to tag the image and make it available to Kubernetes. You could push it to a registry like quay.io. In this case we tag it as localhost:5000/todo-backend:latest and push it to the internal registry in our Kubernetes instance:

# Tag the image
docker tag todo-backend localhost:5000/todo-backend:latest
# Push the image to the registry
docker push localhost:5000/todo-backend:latest

In the below call to helm install which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:

  • --set wildfly.build.enabled=false - This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.

  • --set wildfly.deploy.route.enabled=false - This disables route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift specific concept and thus not available on Kubernetes.

  • --set wildfly.image.name="localhost:5000/todo-backend" - This tells the Helm chart to use the image we built, tagged and pushed to Kubernetes' internal registry above.

$ helm install todo-backend charts --wait --timeout=10m0s --set wildfly.build.enabled=false --set wildfly.deploy.route.enabled=false --set wildfly.image.name="localhost:5000/todo-backend"
NAME: todo-backend
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

kubectl get deployment todo-backend

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

apiVersion: v2
name: todo-backend-chart
description: A Helm chart to deploy a WildFly todo-backend application and its Postgresql database
type: application
version: 1.0.0
appVersion: 31.0.0.Final
dependencies:
    - name: postgresql
      repository: https://charts.bitnami.com/bitnami
      version: 13.1.5
    - name: wildfly
      repository: http://docs.wildfly.org/wildfly-charts/
      version: 2.3.2

This will create a new deployment on Kubernetes and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the todo-backend service created for us by the Helm chart.

This service will run on port 8080, and we set up the port forward to also run on port 8080:

kubectl port-forward service/todo-backend 8080:8080

The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.

The Maven profile named openshift is used by the Helm chart to provision the server with the quickstart deployed on the root web context, and thus the application should be accessed with the URL without the /todo-backend path segment after HOST:PORT.

Run the Integration Tests with Kubernetes

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.

The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 

Undeploy the WildFly Source-to-Image (S2I) Quickstart from Kubernetes with Helm Charts

$ helm uninstall todo-backend

To stop the port forward you created earlier, interrupt or kill the kubectl port-forward process (for example with Ctrl+C in the terminal where it is running).

Environment variables for PostgreSQL

The Helm Chart also contains the environment variables required to connect to the PostgreSQL database.

In local deployment the credentials were passed directly as the values of the environment variables.

For Kubernetes, we rely on secrets so that the credentials are never copied outside Kubernetes:

deploy:
  env:
    - name: POSTGRESQL_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-password
          name: todo-backend-db

When the application is deployed, the value for the POSTGRESQL_PASSWORD will be taken from the key database-password in the secret todo-backend-db.

Clean Up

Remove the Backend

The backend can be deleted from Kubernetes by running the command:

$ helm uninstall todo-backend
release "todo-backend" uninstalled

Conclusion

This quickstart shows how the datasources feature pack provided by WildFly simplifies deploying on OpenShift a WildFly Jakarta EE backend that connects to an external database and exposes an HTTP API.

The use of a Server Provisioned deployment makes it seamless to move from a local deployment for development to a deployment on cloud platforms such as OpenShift and Kubernetes.