This guide documents the various features and capabilities provided by the WildFly Operator.

This guide is complemented by the API Documentation.

Basic Install (Phase I)

The features and capabilities of Basic Install (Phase I) deal with the provisioning, installation and configuration of a Java application managed by the WildFly Operator.

Specify the Docker Application Image

The applicationImage specifies the Docker application image that contains the Java application. The operator supports two kinds of application images:

  • WildFly S2I Builder/Runtime application image. This application image is built by using WildFly S2I. This is the default one.

  • Bootable JAR application image. This application image is built by using Bootable JAR.

Example of application image configuration
spec:
  applicationImage: "quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0"

The applicationImage accepts different references to a Docker image:

  • the name of the image: quay.io/wildfly-quickstarts/wildfly-operator-quickstart

  • a tag: quay.io/wildfly-quickstarts/wildfly-operator-quickstart:18.0

  • a digest: quay.io/wildfly-quickstarts/wildfly-operator-quickstart@sha256:0af38bc38be93116b6a1d86a9c78bd14cd527121970899d719baf78e5dc7bfd2

  • an imagestream tag: wildfly-operator-quickstart:latest

Note

Using an imagestream tag is supported only on OpenShift and the imagestream must be in the same namespace as the deployed WildFlyServer resource. The WildFly Operator will trigger a new deployment of the application whenever the image corresponding to the imagestream tag changes.
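
For illustration, a spec that uses the imagestream tag reference from the list above could look like the following sketch (the imagestream name is the illustrative one from the earlier bullet):

Example of an imagestream tag reference
spec:
  applicationImage: "wildfly-operator-quickstart:latest"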

Specify a Bootable JAR application image

The bootableJar specifies whether the application image is a Bootable JAR server. If this configuration is unspecified, the WildFly Operator assumes that the application image is an S2I image.

Example of Bootable JAR configuration
spec:
  bootableJar: true

There are some restrictions on the base image you can use to launch the Bootable JAR:

  • The base image for the Bootable JAR container must be registry.access.redhat.com/ubi8/openjdk-11 or any other Red Hat ubi8 image supplying a higher JDK version.

  • The Bootable JAR must be built for a server configured for the cloud environment. This is done by enabling the cloud configuration item in the wildfly-jar-maven-plugin configuration. See Configuring the server for cloud execution in the wildfly-jar-maven-plugin documentation.

Specify the Size of the Application

The replicas specifies the size of the application, i.e. the number of pods that run the application image.

Example of size configuration
spec:
  replicas: 2

Specify the Resource requirements for the container

The resources spec is defined in the Resources API Documentation.

Example of resources configuration
spec:
  resources:
    limits:
      cpu: 500m
      memory: 2Gi
    requests:
      cpu: 100m
      memory: 1Gi

Specify the Storage Requirements for the Server Data Directory

The storage defines the storage requirements for WildFly’s own data. The application may require persistent storage for some data (e.g. transaction or messaging logs) that must persist across Pod restarts.

If the storage spec is empty, an EmptyDir volume will be used by each pod of the application (but this volume will not persist after its corresponding pod is stopped).

Warning

Using EmptyDir is especially unsafe for XA transaction processing. When the application uses XA transactions, it is required to configure a persistent volume claim (see the example below) or to configure the transaction subsystem to store transaction logs in a database using the JDBC store.

A volumeClaimTemplate can be specified to configure the resource requirements for the storage of the WildFly standalone data directory. The name of the template is derived from the WildFlyServer name. The corresponding volume will be mounted in ReadWriteOnce access mode.

The storage spec is defined in the StorageSpec API Documentation.

Example of storage requirement
spec:
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 3Gi

The persistent volume that meets this storage requirement is mounted on the /wildfly/standalone/data directory (corresponding to WildFly’s jboss.server.data.dir path).
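
If your cluster requires an explicit storage class, the volumeClaimTemplate is expected to accept the standard PersistentVolumeClaim spec fields; the following sketch assumes that behaviour (check the StorageSpec API Documentation) and uses an illustrative storage class name:

Example of a storage requirement with a storage class (sketch)
spec:
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: my-storage-class   # illustrative name, not a default
        resources:
          requests:
            storage: 3Gi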

Configure the Application Environment

Environment can be configured using the env spec. Environment variables can come directly from values (such as the POSTGRESQL_SERVICE_HOST example below) or from secrets (e.g. the POSTGRESQL_USER example below).

Example of environment configuration
spec:
  env:
  - name: POSTGRESQL_SERVICE_HOST
    value: postgresql
  - name: POSTGRESQL_SERVICE_PORT
    value: '5432'
  - name: POSTGRESQL_DATABASE
    valueFrom:
      secretKeyRef:
        key: database-name
        name: postgresql
  - name: POSTGRESQL_USER
    valueFrom:
      secretKeyRef:
        key: database-user
        name: postgresql
  - name: POSTGRESQL_PASSWORD
    valueFrom:
      secretKeyRef:
        key: database-password
        name: postgresql

Configure Secrets

Secrets can be mounted as volumes to be accessed from the application.

The secrets must be created before the WildFly Operator deploys the application. For example, we can create a secret named my-secret with a command such as:

$ kubectl create secret generic my-secret --from-literal=my-key=devuser --from-literal=my-password='my-very-secure-password'

Once the secret has been created, we can specify its name in the WildFlyServer Spec to have it mounted as a volume in the pods running the application:

Example of mounting secrets
spec:
  secrets:
    - my-secret

The secrets will then be mounted under /etc/secrets/<secret name> and each key/value will be stored in a file (whose name is the key and the content is the value).

Secret is mounted as a volume inside the Pod
[jboss@quickstart-0 ~]$ ls /etc/secrets/my-secret/
my-key  my-password
[jboss@quickstart-0 ~]$ cat /etc/secrets/my-secret/my-key
devuser
[jboss@quickstart-0 ~]$ cat /etc/secrets/my-secret/my-password
my-very-secure-password

Configure ConfigMaps

ConfigMaps can be mounted as volumes to be accessed from the application.

The config maps must be created before the WildFly Operator deploys the application. For example, we can create a config map named my-config with a command such as:

$ kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2
configmap/my-config created

Once the config map has been created, we can specify its name in the WildFlyServer Spec to have it mounted as a volume in the pods running the application:

Example of mounting config maps
spec:
  configMaps:
  - my-config

The config maps will then be mounted under /etc/configmaps/<config map name> and each key/value will be stored in a file (whose name is the key and the content is the value).

Config Map is mounted as a volume inside the Pod
[jboss@quickstart-0 ~]$ ls /etc/configmaps/my-config/
key1 key2
[jboss@quickstart-0 ~]$ cat /etc/configmaps/my-config/key1
value1
[jboss@quickstart-0 ~]$ cat /etc/configmaps/my-config/key2
value2

Bring Your Own Standalone XML Configuration

It is possible to directly provide a WildFly standalone configuration instead of the one in the application image (that comes from WildFly S2I).

The standalone XML file must be put in a ConfigMap that is accessible by the operator. The standaloneConfigMap must provide the name of this ConfigMap as well as the key corresponding to the name of the standalone XML file.

Example of bringing your own standalone configuration
spec:
  standaloneConfigMap:
    name: clusterbench-config-map
    key: standalone-openshift.xml

In this example, the clusterbench-config-map must be created before the WildFly Operator deploys the application.

Example of creating a ConfigMap from a standalone XML file
$ kubectl create configmap clusterbench-config-map --from-file examples/clustering/config/standalone-openshift.xml
configmap/clusterbench-config-map created
Note

This feature is not supported by a Bootable JAR application image. If you enable it, your application image will not be deployed to the cluster.

Monitor WildFly Metrics

If the Prometheus Operator is deployed on the cluster (and the ServiceMonitor Custom Resource Definition is installed), the WildFly operator automatically creates a ServiceMonitor to expose its metrics to Prometheus.
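
For illustration, a ServiceMonitor created for a WildFlyServer named quickstart would be conceptually similar to the following sketch; the resource name, label selector and port name shown here are assumptions for the example, not the exact values used by the operator:

Sketch of a ServiceMonitor exposing WildFly metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: quickstart                         # illustrative; the operator derives the actual name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: quickstart   # illustrative label selector
  endpoints:
  - port: admin                            # illustrative port name for the WildFly management/metrics endpoint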

OpenShift Features

Some Operator features are only available when running on OpenShift, as Kubernetes does not provide the resources required to activate these features.

Seamless Upgrades

On OpenShift, it is possible to use an imagestream tag for the applicationImage to provide seamless upgrades.

The imagestream must be in the same namespace as the deployed WildFlyServer resource. If that’s the case, the WildFly Operator will trigger a new deployment of the application whenever the image corresponding to the imagestream tag changes.

This allows you to take full advantage of the OpenShift ecosystem to build the image using a BuildConfig in order to trigger new deployments when the code of the application changes (using WebHooks to trigger new builds and then new deployments) or when the WildFly S2I images change (which can also trigger a new build).
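
For illustration, a minimal BuildConfig feeding the imagestream tag used as the applicationImage could look like the following sketch; the Git repository, builder imagestream and webhook secret are assumptions chosen for the example:

Sketch of a BuildConfig feeding the applicationImage imagestream tag
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: wildfly-operator-quickstart
spec:
  source:
    git:
      uri: https://github.com/example/my-wildfly-app.git   # illustrative repository
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: wildfly-s2i:latest                           # illustrative WildFly S2I builder imagestream
  output:
    to:
      kind: ImageStreamTag
      name: wildfly-operator-quickstart:latest             # the imagestream tag referenced by applicationImage
  triggers:
  - type: ConfigChange
  - type: ImageChange
    imageChange: {}
  - type: GitHub
    github:
      secret: my-webhook-secret                            # illustrative webhook secret

With such a setup, each successful build pushes a new image to the imagestream tag and the WildFly Operator rolls out a new deployment, as described above.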

Creation of an HTTP Route

By default, when the Operator runs on OpenShift, it creates an external route to the HTTP port of the Java application.

This route creation can be disabled by setting disableHTTPRoute to true if you do not wish to create an external route to the Java application.

Example to disable HTTP route
spec:
  disableHTTPRoute: true

Transaction recovery during scaledown

The application deployed in the WildFly application server may use JTA transactions, and the question emerges: what happens when the cluster is scaled down? When the number of active WildFly replicas is decreased, there may still be some in-doubt transactions in the transaction log. When the pod is removed, all the in-progress transactions are stopped and rolled back. A more troublesome situation occurs when XA transactions are used. When an XA transaction declares it is prepared, it is a promise to finish the transaction successfully. But the transaction manager which made this promise is running inside the WildFly server. Simply shutting down such a pod may lead to data inconsistencies or data locks.

It must be ensured that all transactions are finished before the number of replicas is actually decreased. For that purpose, the WildFly Operator provides scale-down functionality which verifies that all transactions have finished and only then marks the pod as clean for termination.

Decreasing the replica size in the WildFlyServer custom resource is done at the field WildFlyServer.Spec.Replicas (see Specify the Size of the Application). You can, for example, use a patch command like

oc patch wildflyserver <name> -p '[{"op":"replace", "path":"/spec/replicas", "value":0}]' --type json

or you can manually edit and change the replica number with oc edit wildflyserver <name>.

Note
Decreasing the replica size at the StatefulSet or deleting the Pod itself has no effect, as such changes will be reverted.
Warning
If you decide to delete the whole WildFlyServer definition (oc delete wildflyserver <deployment_name>), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. If you want to remove the deployment in a safe way without data inconsistencies, you need first to scale down the number of pods to 0, wait until all pods are terminated, and only after that delete the WildFlyServer instance.
Warning
The Narayana recovery listener has to be enabled in the WildFly transaction subsystem. Otherwise, scaledown transaction recovery processing is skipped for the particular WildFly pod. See the recovery-listener attribute of the transaction subsystem.

When the scaledown process begins, the pod state (oc get pod <pod_name>) is still marked as Running. The reason is that the pod needs to be able to finish all the unfinished transactions, which includes the remote EJB calls that target it. If you want to observe the state of the scaledown processing, you need to observe the status of the WildFlyServer instance. When running oc describe wildflyserver <name> you can see the status of the Pods.

The WildFlyServer.Status.Pods[].State can be one of the following values:

ACTIVE

The pod is active and processing requests.

SCALING_DOWN_RECOVERY_INVESTIGATION

The pod is under investigation to find out if there are transactions that did not complete their lifecycle successfully.

SCALING_DOWN_RECOVERY_PROCESSING

There are in-doubt transactions in the log store. The pod cannot be terminated until these transactions are either completed or cleaned.

SCALING_DOWN_RECOVERY_HEURISTICS

There are heuristic transactions in the log store. The pod cannot be terminated until these transactions are either completed or cleaned.

SCALING_DOWN_CLEAN

The pod was processed by transaction scaled down processing and is marked as clean to be removed from the cluster.

For pods in the SCALING_DOWN_RECOVERY_PROCESSING status, each recovery cycle is tracked using the RecoveryCounter (accessible via WildFlyServer.Status.Pods[].RecoveryCounter). This field monitors the operator’s recovery attempts until all in-doubt transactions are successfully completed.

You can observe the overall state of the active and non-active pods by looking at the WildFlyServer.Status.'Scalingdown Pods' and WildFlyServer.Status.Replicas fields. The 'Scalingdown Pods' field defines the number of pods which are about to be terminated once they are clean of unfinished transactions. The Replicas field defines the current number of running pods. The WildFlyServer.Spec.Replicas (see Specify the Size of the Application) defines the desired number of active pods. If there are no pods in the scaledown process, the numbers of WildFlyServer.Status.Replicas and WildFlyServer.Spec.Replicas are equal.
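
For illustration, during a scaledown from two pods to one, the status section of the WildFlyServer resource could look roughly like the following sketch; the lower-case field names are inferred from the fields described above and the values are invented for the example:

Sketch of a WildFlyServer status during scaledown
status:
  replicas: 1                 # current number of running pods
  scalingdownPods: 1          # pods waiting to become clean of unfinished transactions
  pods:
  - name: quickstart-0
    state: ACTIVE
  - name: quickstart-1
    state: SCALING_DOWN_RECOVERY_PROCESSING
    recoveryCounter: 2        # recovery cycles attempted so far for this pod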

Note

This feature is not supported by a Bootable JAR application image. The transaction recovery facility will be ignored for Bootable JAR application images.

Warning

It is possible to disable transaction recovery during scale-down by configuring the property WildFlyServerSpec.DeactivateTransactionRecovery to true (by default, it is set to false). When DeactivateTransactionRecovery is enabled, in-doubt and heuristic transactions won’t be finalized or reported. Consequently, this could lead to potential data inconsistency or loss when distributed transactions are employed.
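
For illustration, such a configuration would be expressed in the WildFlyServer spec roughly as follows; the lower-case field name deactivateTransactionRecovery is assumed to mirror the Go property above, so check the API Documentation for the exact spelling:

Example of disabling transaction recovery (sketch)
spec:
  deactivateTransactionRecovery: true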

Transaction scaledown special cases

Heuristic transactions

As is well known, a transaction may finish either with a commit or a roll-back. Unfortunately, there is a third outcome which is unknown. It is a state where there is no way of automatic transaction recovery and human intervention is needed. If the transaction is in a heuristic state, the pod is marked as SCALING_DOWN_RECOVERY_HEURISTICS and the administrator needs to manually connect with the jboss-cli to the particular WildFly instance and resolve the heuristic transaction.

When all the formerly heuristic records are removed from the transaction object store, the operator marks the pod as SCALING_DOWN_CLEAN and the pod is terminated.

SCALING_DOWN_CLEAN state and StatefulSet behaviour

There is a special case coming from the design of the StatefulSet that ensures that the network hostname is stable (it does not change on pod restart). The StatefulSet depends on the ordering of the pods. The pods are named according to the defined order. The StatefulSet then requires that pod-0 is not terminated before pod-1: first pod-1 is terminated and then pod-0.

From that rule we can observe that if pod-1 is in the state SCALING_DOWN_RECOVERY_HEURISTICS (e.g. it contains some heuristic transactions), then if pod-0 is in the state SCALING_DOWN_CLEAN it will linger in that state until pod-1 is terminated.

But even when the pod is in the state SCALING_DOWN_CLEAN, it is not receiving any new requests, so it is practically idle.