The ejb-txn-remote-call quickstart demonstrates remote transactional EJB calls across two WildFly Application Servers.
What is it?
The ejb-txn-remote-call quickstart demonstrates remote transactional EJB calls across two WildFly Application Servers. The remote side forms an HA cluster.
Description
This quickstart demonstrates how EJB remote calls propagate a JTA transaction across WildFly Application Servers. Further, this quickstart demonstrates transaction recovery, which runs on both servers when a failure occurs.
This quickstart contains two Maven projects.
The first Maven project represents the sender side and is intended to be deployed on the first WildFly server (server1).
The second project represents the receiver side and is intended to be deployed
to the other two WildFly servers (server2 and server3). The two projects must not be deployed to the same server.
| Project | Description |
|---|---|
| `client` | The application deployed to the first WildFly server. Users can interact with this application through some REST endpoints, which start remote EJB calls toward the `server` application. |
| `server` | The application deployed to the second and third WildFly servers. This application receives the remote EJB calls from the `client` application. |
Running the Quickstart
This quickstart demonstrates its functionalities on bare metal, with the WildFly Maven plugin, and on OpenShift.
System Requirements
The application this project produces is designed to be run on WildFly Application Server 38 or later.
All you need to build this project is Java SE 17.0 or later, and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
The Goal
The EJB remote call propagates the JTA transaction from the client application
to the server application. The remote call hits one of the two servers where the server application is deployed.
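As an illustration of this mechanism, the following is a minimal sketch, not the quickstart's actual code, of a client-side bean that spans two remote EJB invocations with a single container-managed JTA transaction. The class, bean, interface, and JNDI names are hypothetical, and Jakarta EE APIs are assumed:

```java
// RemoteBean.java -- hypothetical remote business interface exposed by the server application
import jakarta.ejb.Remote;

@Remote
public interface RemoteBean {
    // returns the hostname of the cluster node that served the call
    String remoteCall();
}

// ClientCallerBean.java -- hypothetical client-side bean packaged in client.war
import java.util.ArrayList;
import java.util.List;

import jakarta.ejb.EJB;
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;

@Stateless
public class ClientCallerBean {

    // illustrative lookup string; it must match the actual deployment, bean, and interface names
    @EJB(lookup = "ejb:/server//RemoteBeanImpl!RemoteBean")
    private RemoteBean remoteBean;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public List<String> callServerTwice() {
        List<String> hostnames = new ArrayList<>();
        hostnames.add(remoteBean.remoteCall()); // first call: the JTA transaction is propagated to the cluster
        hostnames.add(remoteBean.remoteCall()); // second call: transaction affinity routes it to the same node
        return hostnames;
    }
}
```

Because both invocations run inside the same transaction, the calls stick to the same cluster node, which is why the transactional REST endpoints described later return identical hostnames.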
Running in a bare metal environment
First of all, an environment variable should be defined to point to this quickstart’s root folder. From the root of this quickstart, the following command should be executed:
export PATH_TO_QUICKSTART_DIR=$(pwd)
Second, another environment variable should be defined to point to the WildFly installation directory:
export WILDFLY_HOME=...
Then, three WildFly servers need to be configured:
- The `client` application gets deployed to the first server (`server1`)
- The `server` application gets deployed to the other two WildFly servers (`server2` and `server3`, which are configured as a cluster)
Setup WildFly servers
The easiest way to start multiple instances of WildFly on a local computer is to copy the WildFly installation directory to three separate directories.
The installation directories are named:
- `WILDFLY_HOME_1` for `server1`
- `WILDFLY_HOME_2` for `server2`
- `WILDFLY_HOME_3` for `server3`
Given that the WildFly installation directory is identified by $WILDFLY_HOME:
cp -r $WILDFLY_HOME server1; \
WILDFLY_HOME_1="$PWD/server1"
cp -r $WILDFLY_HOME server2; \
WILDFLY_HOME_2="$PWD/server2"
cp -r $WILDFLY_HOME server3; \
WILDFLY_HOME_3="$PWD/server3"
Creating a user for server2 and server3
To successfully process EJB remote calls from server1 to either server2
or server3, a user that authenticates the EJB remote calls must be created on the receiving servers.
Type the following command to add the user to server2:
$WILDFLY_HOME_2/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
And type the following command to add the user to server3:
$WILDFLY_HOME_3/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
Note: For Windows, use the WILDFLY_HOME\bin\add-user.bat script.
Configure datasources
As this quickstart performs transactional work against a database, a database needs to be created. For the purpose of this quickstart, a simple PostgreSQL container will be used. Open another terminal and run the following command to download and start its image:
podman run -p 5432:5432 --rm -ePOSTGRES_DB=test -ePOSTGRES_USER=test -ePOSTGRES_PASSWORD=test postgres:9.4 -c max-prepared-transactions=110 -c log-statement=all
The WildFly servers need to be configured to be able to connect to the database. First of all, a JDBC driver needs to be installed as a JBoss module.
The following command downloads the PostgreSQL driver automatically through Maven:
cd ${PATH_TO_QUICKSTART_DIR};
mvn clean process-sources
Then, the PostgreSQL driver needs to be loaded as a JBoss module in all WildFly servers:
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/client/target/postgresql/postgresql.jar"
Moreover, the PostgreSQL JDBC driver needs to be registered in the datasources subsystem of all WildFly servers.
For server1, the configuration file standalone.xml will be used.
For server2 and server3, the configuration file standalone-ha.xml will be used.
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server --server-config=standalone.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
Finally, it is time to run the scripts for adding the PostgreSQL datasource to the WildFly servers:
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/client/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/client/scripts/cli.local.properties
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/server/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/server/scripts/cli.local.properties
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/server/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/server/scripts/cli.local.properties
Configuring EJB remoting on server1
EJB remote calls from server1 to either server2 or server3 need to be authenticated. To achieve
this configuration, the script ${PATH_TO_QUICKSTART_DIR}/client/scripts/remoting-configuration.cli
will be executed on server1.
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DremoteServerUsername='quickstartUser' -DremoteServerPassword='quickstartPwd1!' \
--file=${PATH_TO_QUICKSTART_DIR}/client/scripts/remoting-configuration.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/client/scripts/cli.local.properties
Note: For Windows, use the bin\jboss-cli.bat script.
Running remoting-configuration.cli results in the creation of:

- A `remote outbound socket` that points to the port on `server2`/`server3` where EJB remoting endpoints can be reached
- A `remote outbound connection` that can be referenced in the war deployment with the `jboss-ejb-client.xml` descriptor (see `${PATH_TO_QUICKSTART_DIR}/client/src/main/webapp/WEB-INF/jboss-ejb-client.xml`); a lookup sketch follows this list
- An authentication context `auth_context` that is used by the newly created remoting connection `remote-ejb-connection`; the authentication context uses the same username and password created for `server2` and `server3`
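As a rough illustration of how the remote outbound connection is consumed from code, and not the quickstart's actual implementation, the snippet below looks up a remote EJB through the `ejb:` JNDI namespace; invocations made on the returned proxy travel over the `remote-ejb-connection` referenced in `jboss-ejb-client.xml`, carrying the transaction context with them. The module, bean, and interface names are hypothetical (reusing the `RemoteBean` interface sketched earlier):

```java
import java.util.Properties;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteBeanLocator {

    public static RemoteBean lookupRemoteBean() throws NamingException {
        Properties props = new Properties();
        // register the EJB client URL handler so that the "ejb:" namespace can be resolved
        props.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming");
        Context context = new InitialContext(props);
        // "server" is the module name of server.war; the bean and interface names are illustrative
        return (RemoteBean) context.lookup("ejb:/server//RemoteBeanImpl!RemoteBean");
    }
}
```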
Start WildFly servers
At this point, the configuration of the WildFly servers is completed.
server1 must be started with the standalone.xml configuration,
while server2 and server3 must be started with the standalone-ha.xml configuration to create a cluster.
As all WildFly servers will be run in the same bare metal environment,
a port offset will be applied to server2 and server3. Moreover,
each server has to define a unique transaction node identifier and jboss node name.
Start each server in a separate terminal.
cd $WILDFLY_HOME_1; \
./bin/standalone.sh -c standalone.xml -Djboss.tx.node.id=server1 -Djboss.node.name=server1
cd $WILDFLY_HOME_2; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server2 -Djboss.node.name=server2 -Djboss.socket.binding.port-offset=100
cd $WILDFLY_HOME_3; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server3 -Djboss.node.name=server3 -Djboss.socket.binding.port-offset=200
Note: For Windows, use the bin\standalone.bat script.
Deploying the Quickstart applications
- With all WildFly servers configured and running, the `client` and `server` applications can be deployed.
- Ensure the whole project is built:

  cd ${PATH_TO_QUICKSTART_DIR}
  mvn clean package

- Then, the `client` application can be deployed using the following commands:

  cd ${PATH_TO_QUICKSTART_DIR}/client
  mvn wildfly:deploy

- Lastly, the `server` application can be deployed using the following commands:

  cd ${PATH_TO_QUICKSTART_DIR}/server
  mvn wildfly:deploy -Dwildfly.port=10090
  mvn wildfly:deploy -Dwildfly.port=10190
These commands use the WildFly Maven plugin to connect to the running WildFly instances
and deploy the war archives to the servers.
The following warnings might appear in the server output after the applications are deployed. These warnings can be safely ignored in a development environment.
WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 90) JGRP000015: the receive buffer of socket ManagedMulticastSocketBinding was set to 20MB, but the OS only allocated 6.71MB
WARN [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 90) JGRP000015: the receive buffer of socket ManagedMulticastSocketBinding was set to 25MB, but the OS only allocated 6.71MB
Checkpoints
- If errors occur, verify that the WildFly servers are running and that they are configured properly.
- Verify that all deployments are published to all three servers.
- On `server1`, check the log to confirm that the `client/target/client.war` archive is deployed:

  ...
  INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 76) WFLYUT0021: Registered web context: '/client' for server 'default-server'
  INFO [org.jboss.as.server] (management-handler-thread - 2) WFLYSRV0010: Deployed "client.war" (runtime-name : "client.war")

- On `server2` and `server3`, check the log to confirm that the `server/target/server.war` archive is deployed:

  ...
  INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 86) WFLYUT0021: Registered web context: '/server' for server 'default-server'
  INFO [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "server.war" (runtime-name : "server.war")

- Verify that `server2` and `server3` formed an HA cluster. Check the server log of either `server2` or `server3`, or both:

  [org.infinispan.CLUSTER] () ISPN000094: Received new cluster view for channel ejb: [server2|1] (2) [server2, server3]
  [org.infinispan.CLUSTER] () ISPN100000: Node server3 joined the cluster
  ...
  INFO [org.infinispan.CLUSTER] () [Context=server.war/infinispan] ISPN100010: Finished rebalance with members [server2, server3], topology id 5
Examining the Quickstart
Once the WildFly servers are configured and started, and the quickstart artifacts are deployed, it is possible to
invoke the endpoints of server1, which generate EJB remote invocations against the HA cluster formed by server2 and server3.
The following table defines the available endpoints, and their expected behaviour.
Note: The endpoints return data in JSON format. You can use a tool such as `jq` (as in the examples below) to format the output.
The HTTP invocations return the hostnames of the contacted servers.
| URL | Behaviour | Expectation |
|---|---|---|
| `client/remote-outbound-stateless` | Two invocations under the transaction context started on `server1`. | The two returned hostnames must be the same. |
| `client/remote-outbound-notx-stateless` | Several remote invocations to a stateless EJB without a transaction context. The EJB remote call is configured from the remote outbound connection. | The list of the returned hostnames should contain occurrences of both `server2` and `server3`. |
| … | Two invocations under the transaction context started on `server1`. | The returned hostnames must be the same. |
| … | Two invocations under the transaction context started on `server1`. | The returned hostnames must be the same. |
| … | Two invocations under the transaction context started on `server1`. | The returned hostnames must be the same. |
| `client/remote-outbound-fail-stateless` | An invocation under the transaction context started on `server1`. | When the recovery manager finishes the work, all the transaction resources are committed. |
Observing the recovery processing after client/remote-outbound-fail-stateless call
The EJB call to the endpoint client/remote-outbound-fail-stateless simulates the presence
of an intermittent network error happening at the commit phase of the two-phase commit protocol (2PC).
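To make the failure mode concrete, the following is a simplified, hypothetical sketch of an XAResource that behaves this way; it is not the quickstart's actual MockXAResource. The resource votes to commit during the prepare phase but throws XA_RETRY on the first commit attempt, leaving an in-doubt transaction branch that the recovery manager completes later:

```java
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Hypothetical resource simulating an intermittent failure at the commit phase of 2PC.
public class FailingOnceXAResource implements XAResource {

    private static volatile boolean failNextCommit = true;

    @Override
    public int prepare(Xid xid) { return XA_OK; }            // phase one: vote "yes"

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        if (failNextCommit) {
            failNextCommit = false;                           // only the first attempt fails
            throw new XAException(XAException.XA_RETRY);      // intermittent failure: retry later
        }
        // a later attempt, driven by the recovery manager, succeeds
    }

    @Override
    public void rollback(Xid xid) { }
    @Override
    public void start(Xid xid, int flags) { }
    @Override
    public void end(Xid xid, int flags) { }
    @Override
    public void forget(Xid xid) { }
    @Override
    public Xid[] recover(int flag) { return new Xid[0]; }     // simplified: no in-doubt Xids reported here
    @Override
    public boolean isSameRM(XAResource other) { return other == this; }
    @Override
    public int getTransactionTimeout() { return 0; }
    @Override
    public boolean setTransactionTimeout(int seconds) { return true; }
}
```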
The transaction recovery manager
periodically attempts to recover the unfinished work, and only when this attempt succeeds is the
transaction completed (which makes the update in the database visible). It is possible to confirm the completion of
the transaction by invoking the REST endpoint server/commits at both servers server2 and server3.
curl -s http://localhost:8180/server/commits
curl -s http://localhost:8280/server/commits
The response of server/commits is a tuple composed of the host's info and the number of commits.
For example, the output could be ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","3"],
which says that the hostname is mydev.narayana.io, the jboss node name is server2,
and the number of commits is 3.
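Purely for illustration, and not the quickstart's actual implementation, a JAX-RS resource producing a response of this shape could look like the hypothetical sketch below (Jakarta EE APIs assumed):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.atomic.AtomicInteger;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/commits")
public class CommitsResource {

    // incremented elsewhere whenever a transaction branch commits successfully on this node
    private static final AtomicInteger COMMIT_COUNT = new AtomicInteger();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String[] commits() throws UnknownHostException {
        String hostInfo = "host: " + InetAddress.getLocalHost()
                + ", jboss node name: " + System.getProperty("jboss.node.name");
        // serialized by the JSON provider as ["host: ...", "<number of commits>"]
        return new String[] { hostInfo, String.valueOf(COMMIT_COUNT.get()) };
    }
}
```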
The transaction recovery manager runs periodically (by default, every 2 minutes) on all servers.
Nevertheless, as the transaction is initiated on server1, the recovery manager on this server is
responsible for initiating the recovery process.
Note: The recovery process can be started manually.
Steps to observe that the recovery processing was done
- Before invoking the `remote-outbound-fail-stateless` endpoint, double check the number of commits on `server2` and `server3` by invoking the `server/commits` endpoints:

  curl http://localhost:8180/server/commits; echo
  # output example:
  # ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","1"]
  curl http://localhost:8280/server/commits; echo
  # output example:
  # ["host: mydev.narayana.io/192.168.0.1, jboss node name: server3","2"]

- Invoke the REST endpoint `client/remote-outbound-fail-stateless`:

  curl http://localhost:8080/client/remote-outbound-fail-stateless | jq .

  The JSON output of the previous command reports the name of the server the request was sent to.

- At the server reported by the previous command, verify the number of commits by invoking the `server/commits` endpoint.
- Check the log of `server1` for the following warning message:

  ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=null, eis_name=unknown eis name > (Subordinate XAResource at remote+http://localhost:8180) failed with exception $XAException.XA_RETRY: javax.transaction.xa.XAException: WFTXN0029: The peer threw an XA exception

  This message means that the transaction manager was not able to commit the transaction because an error occurred while committing the transaction on the remote server. The `XAException.XA_RETRY` exception, meaning an intermittent failure, was reported in the logs.

- The logs on `server2` or `server3` contain a warning about the `XAResource` failure as well:

  ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=43, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=server2, eis_name=unknown eis name > (org.jboss.as.quickstarts.ejb.mock.MockXAResource@731ae22) failed with exception $XAException.XAER_RMFAIL: javax.transaction.xa.XAException

- Wait for the recovery process at `server1` to recover the unfinished transaction (or force a recovery cycle manually).
- The number of commits on the targeted server should be incremented by one.
Run the Integration Tests
This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
Follow these steps to run the integration tests.
- Make sure the WildFly servers are started.
- Make sure the quickstart is deployed.
- Type the following command to run the `verify` goal with the `integration-testing` profile activated:

  $ mvn verify -Pintegration-testing
Undeploy the Quickstart
When you are finished testing the quickstart, execute these commands to undeploy the archives:
cd ${PATH_TO_QUICKSTART_DIR}/client
mvn wildfly:undeploy
cd ${PATH_TO_QUICKSTART_DIR}/server
mvn wildfly:undeploy -Dwildfly.port=10090
mvn wildfly:undeploy -Dwildfly.port=10190
Server Log: Expected Warnings and Errors
This quickstart is not production grade. The server logs include the following warnings during startup. It is safe to ignore these warnings.
WFLYDM0111: Keystore standalone/configuration/application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost
WFLYELY01084: KeyStore .../standalone/configuration/application.keystore not found, it will be auto generated on first use with a self-signed certificate for host localhost
WFLYSRV0018: Deployment "deployment.server.war" is using a private module ("org.jboss.jts") which may be changed or removed in future versions without notice.
Building and running the quickstart application with provisioned WildFly server
Instead of using a standard WildFly server distribution, the three WildFly servers needed to deploy and run the quickstart can be provisioned:
cd ${PATH_TO_QUICKSTART_DIR}/client;
mvn clean package \
-DremoteServerUsername="quickstartUser" -DremoteServerPassword='quickstartPwd1!' \
-DpostgresqlUsername="test" -DpostgresqlPassword="test"
cd ${PATH_TO_QUICKSTART_DIR}/server;
mvn clean package \
-Dwildfly.provisioning.dir=server2 -Djboss-as.home=target/server2 \
-DpostgresqlUsername="test" -DpostgresqlPassword="test";
mvn package \
-Dwildfly.provisioning.dir=server3 -Djboss-as.home=target/server3 \
-DpostgresqlUsername="test" -DpostgresqlPassword="test"
The provisioned WildFly servers, with the quickstart deployed, can then be found in the ${PATH_TO_QUICKSTART_DIR}/client/target/server, ${PATH_TO_QUICKSTART_DIR}/server/target/server2, and ${PATH_TO_QUICKSTART_DIR}/server/target/server3 directories. Their usage is similar to a standard server distribution, with the simplification that there is never the need to specify the server configuration to be started.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the pom.xml files of the quickstart.
The quickstart user should be added after provisioning the servers, and before running them:
cd ${PATH_TO_QUICKSTART_DIR}/server;
./target/server2/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!';
./target/server3/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
Note: For Windows, use the add-user.bat script.
Run the Integration Tests with a provisioned server
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a provisioned server.
Follow these steps to run the integration tests.
- Ensure the PostgreSQL database is running, as described in Configure datasources.
- Make sure the servers are provisioned by running the commands reported in Building and running the quickstart application with provisioned WildFly server.
- Add the quickstart user to the provisioned `server2` and `server3` by running the commands reported in Building and running the quickstart application with provisioned WildFly server.
- Start the WildFly provisioned servers in three distinct terminals, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation:

  cd ${PATH_TO_QUICKSTART_DIR}/client;
  mvn wildfly:start \
    -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
    -Dwildfly.javaOpts="-Djboss.tx.node.id=server1 -Djboss.node.name=server1"

  cd ${PATH_TO_QUICKSTART_DIR}/server;
  mvn wildfly:start -Djboss-as.home=target/server2 \
    -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
    -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=100 -Djboss.tx.node.id=server2 -Djboss.node.name=server2"

  cd ${PATH_TO_QUICKSTART_DIR}/server;
  mvn wildfly:start -Djboss-as.home=target/server3 \
    -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
    -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=200 -Djboss.tx.node.id=server3 -Djboss.node.name=server3"

- Type the following command to run the `verify` goal with the `integration-testing` profile activated, specifying the quickstart's URL with the `server.host` system property:

  cd ${PATH_TO_QUICKSTART_DIR}/client;
  mvn verify -Pintegration-testing

  cd ${PATH_TO_QUICKSTART_DIR}/server;
  mvn verify -Pintegration-testing -Dserver.host="http://localhost:8180"

  cd ${PATH_TO_QUICKSTART_DIR}/server;
  mvn verify -Pintegration-testing -Dserver.host="http://localhost:8280"

- To shut down the WildFly provisioned servers using the WildFly Maven Plugin:

  mvn wildfly:shutdown
  mvn wildfly:shutdown -Dwildfly.port=10090
  mvn wildfly:shutdown -Dwildfly.port=10190
Running on OpenShift
The ephemeral nature of OpenShift does not work smoothly with WildFly's transaction handling. In fact, WildFly's transaction management saves logs to keep a record of the transaction history in case of extreme scenarios, like crashes or network issues. Moreover, EJB remoting requires a stable remote endpoint to guarantee:
- The transaction affinity of stateful beans, and
- The recovery of transactions.
To fulfil the aforementioned requirements, applications that require ACID transactions must be deployed to WildFly using the WildFly Operator, which can employ OpenShift's StatefulSet. Failing to do so might result in non-ACID transactions.
Prerequisites
Install WildFly’s Operator
To install the WildFly Operator, follow the official documentation (the instructions are also reported here for convenience):
cd /tmp
git clone https://github.com/wildfly/wildfly-operator.git
cd wildfly-operator
oc adm policy add-cluster-role-to-user cluster-admin developer
make install
make deploy
To verify that the WildFly Operator is running, execute the following command:
oc get po -n $(oc project -q)
NAME READY STATUS RESTARTS AGE
wildfly-operator-5d4b7cc868-zfxcv 1/1 Running 1 22h
Start a PostgreSQL database
This quickstart requires a PostgreSQL database to run correctly. In the scope of this quickstart, a PostgreSQL database will be deployed on the OpenShift instance using the Helm chart provided by bitnami:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql -f charts/postgresql.yaml --wait --timeout="5m"
Build the applications
To build the client and the server applications, this quickstart employs
WildFly’s Helm charts.
For more information about WildFly’s Helm chart, please refer to the official
documentation.
helm repo add wildfly http://docs.wildfly.org/wildfly-charts/
helm install client -f charts/client.yaml wildfly/wildfly
helm install server -f charts/server.yaml wildfly/wildfly
Wait for the builds to finish. Their status can be verified by executing the oc get pod command.
Deploy the Quickstart
To deploy the client and the server applications, this quickstart uses the WildFlyServer custom resource,
thanks to which the WildFly Operator is able to create a WildFly pod and
deploy an application.
Note: Make sure that view permissions are granted to the default system account. The KUBE_PING protocol, which is used for forming the HA WildFly cluster on OpenShift, requires view permissions to read the labels of the pods:

oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
cd ${PATH_TO_QUICKSTART_DIR};
oc create -f client/client-cr.yaml;
oc create -f server/server-cr.yaml
If the above commands are successful, the oc get pod command shows
all the pods required for the quickstart, i.e. the client pod and two
server pods (and the PostgreSQL database).
NAME READY STATUS RESTARTS AGE
client-0 1/1 Running 0 29m
postgresql-f9f475f87-l944r 1/1 Running 1 22h
server-0 1/1 Running 0 11m
server-1 1/1 Running 0 11m
Verify the Quickstarts
The WildFly Operator creates routes that make the client and the server applications accessible
outside the OpenShift environment. The oc get route command shows the addresses of the HTTP endpoints.
An example of the output is:
oc get route
NAME HOST/PORT PATH SERVICES PORT
client-route client-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing client-loadbalancer http
server-route server-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing server-loadbalancer http
With the following commands, it is possible to verify some functionalities of this quickstart:
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/direct-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateful | jq .
For other HTTP endpoints, refer to the table above.
If you would like to observe the recovery process, you can follow these shell commands.
# To check failure resolution,
# verify the number of commits that come from the first and second node of the `server` deployments.
# Two calls are needed, as each reports the commit count of a different node.
# Remember the reported number of commits to compare with the results after the crash later.
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
# Run the remote call that causes the JVM of the server to crash.
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-fail-stateless
# The platform restarts the crashed server.
# The following loop waits while printing the number of commits performed at the servers.
while true; do
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
I=$((I+1))
echo " <<< Round: $I >>>"
sleep 2
done
Running on OpenShift: Quickstart application removal
To delete the client and the server applications, the WildFlyServer definitions need to be deleted.
This can be achieved by running:
oc delete WildFlyServer client;
oc delete WildFlyServer server
The client and the server applications will be stopped, and the two pods will be removed.
To remove the Helm charts installed previously:
helm uninstall client;
helm uninstall server;
helm uninstall postgresql
Finally, to undeploy and uninstall the WildFly Operator:
cd /tmp/wildfly-operator;
make undeploy