1. WildFly Testsuite Overview
This document details the implementation of the testsuite integration submodule and guides you through adding your own test cases.
The WildFly integration test suite has been designed with the following goals:
- support execution of all identified test case use cases
- employ a design/organization which is scalable and maintainable
- provide support for the automated measurement of test suite quality (generation of feature coverage reports, code coverage reports)
In addition, these requirements were considered:
- identifying distinct test case runs of the same test case with a different set of client side parameters and server side parameters
- separately maintaining server side execution results (e.g. logs, the original server configuration) for post-execution debugging
- running the testsuite in conjunction with a debugger
- the execution of a single test (for debugging purposes)
- running test cases against different container modes (managed in the main, but also remote and embedded)
- configuring client and server JVMs separately (e.g., IPv6 testing)
1.1. Test Suite Organization
The testsuite module has a few submodules:
- benchmark - holds all benchmark tests intended to assess the relative performance of specific features
- domain - holds all domain management tests
- integration - holds all integration tests
- stress - holds all stress tests
It is expected that test contributions fit into one of these categories.
The pom.xml file located in the testsuite module is inherited by all
submodules and is used to do the following:
- set defaults for common testsuite system properties (which can then be overridden on the command line)
- define dependencies common to all tests (Arquillian, junit or testng, and container type)
- provide a workaround for @Resource(lookup=…) which requires libraries in jbossas/endorsed
It should not:
- define module-specific server configuration build steps
- define module-specific surefire executions
These elements should be defined in logical profiles associated with each logical grouping of tests, e.g. in the pom of the module which contains the tests. The submodule poms contain additional details of their function and purpose, expanding on the information in this document.
1.2. Profiles
You should not activate these profiles with -P, because that disables other profiles which are activated by default.
Instead, always use the activating properties, which are shown in parentheses in the lists below.
Testsuite profiles are used to group tests into logical groups.
- all-modules.module.profile (all-modules)
- integration.module.profile (integration.module)
- compat.module.profile (compat.module)
- domain.module.profile (domain.module)
- benchmark.module.profile (benchmark.module)
- stress.module.profile (stress.module)
They also prepare WildFly instances and resources for respective testsuite submodules.
- jpda.profile - sets surefire.jpda.args (debug)
- ds.profile - sets database properties and prepares the datasource (ds=<db id>)
  - Has related database-specific profiles, like mysql51.profile etc.
- Integration testsuite profiles configure surefire executions:
  - smoke.integration.tests.profile
  - basic.integration.tests.profile
  - clustering.integration.tests.profile
1.3. Integration tests
1.3.1. Smoke -Dts.smoke
Contains smoke tests.
Runs by default; use -Dts.noSmoke to prevent running.
Tests should execute quickly.
Divided into two Surefire executions:
- One with full platform
- Second with web profile (majority of tests).
1.3.2. Basic -Dts.basic
Basic integration tests - those which do not need a special configuration like cluster.
Divided into three Surefire executions:
- One with full platform,
- Second with web profile (majority of tests),
- Third with web profile, but needs to be run after a server restart to check whether persistent data are really persisted.
1.3.3. Clustering -Dts.clustering
Contains all tests relating to clustering aspects of the application server, such as:
- web session clustering,
- Jakarta Enterprise Beans session clustering,
- command dispatcher,
- web session affinity handling,
- and other areas.
Tests should leverage shared testing logic by extending org.jboss.as.test.clustering.cluster.AbstractClusteringTestCase.
The test case contract is that before executing the test method, all specified servers are started and all specified
deployments are deployed. This allows Arquillian resource injection into the test case.
There are four WildFly server instances, one load-balancer (Undertow) and one datagrid (Infinispan server) available for the tests.
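A minimal sketch of a test following this contract is shown below; the deployment names, container qualifiers, and class name are illustrative, and only AbstractClusteringTestCase itself comes from the testsuite:
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.container.test.api.TargetsContainer;
import org.jboss.as.test.clustering.cluster.AbstractClusteringTestCase;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;

public class ExampleClusteredTestCase extends AbstractClusteringTestCase {

    // One deployment per node; which servers/deployments the base class manages
    // is determined by its constants/constructors.
    @Deployment(name = "deployment-1", managed = false, testable = false)
    @TargetsContainer("node-1")
    public static WebArchive deployment1() {
        return ShrinkWrap.create(WebArchive.class, "example.war");
    }

    @Deployment(name = "deployment-2", managed = false, testable = false)
    @TargetsContainer("node-2")
    public static WebArchive deployment2() {
        return ShrinkWrap.create(WebArchive.class, "example.war");
    }

    @Test
    public void testAgainstRunningCluster() {
        // By the time this runs, the specified servers are started and the specified
        // deployments are deployed, so Arquillian resources can be injected.
    }
}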
Maven profiles and Parallelization
There are maven profiles that might come in handy for testing:
- ts.clustering.common.profile - prepares server configurations used by test execution profiles
- ts.clustering.cluster.ha.profile - runs tests against the standalone-ha.xml profile
- ts.clustering.cluster.fullha.profile - runs tests which require the standalone-full-ha.xml profile; e.g. tests requiring the JMS subsystem
- ts.clustering.cluster.ha-infinispan-server.profile - runs tests against the standalone-ha.xml profile with Infinispan Server provisioned via @ClassRule
- ts.clustering.single.profile - runs clustering tests that use a non-HA server profile
- ts.clustering.single.testable.profile - runs clustering tests that use testable=true deployments
- ts.clustering.byteman.profile - runs clustering tests that require installation of Byteman rules
For instance, to only run tests that run against the full-ha profile, activate the clustering tests with -Dts.clustering and exclude
the other profiles with -P:
mvn -f testsuite/integration/clustering/pom.xml clean install -P"-ts.clustering.cluster.ha.profile,-ts.clustering.cluster.fullha.profile,-ts.clustering.cluster.ha-infinispan-server.profile,-ts.clustering.single.profile,-ts.clustering.byteman.profile"
If the testsuite can be run on multiple runners in parallel, the main execution (which takes the majority of the execution time)
can be split by packages using the -Dts.surefire.clustering.ha.additionalExcludes property.
This property feeds a regular expression to exclude sub-packages of the org.jboss.as.test.clustering.cluster package.
The sub-packages at the time of writing are affinity, cdi, dispatcher, ejb, ejb2, group, jms, jpa,
jsf, provider, registry, singleton, sso, web, and xsite.
For instance, to parallelize testsuite execution on two machines (e.g. when using GitHub Actions scripting or similar), the following commands
could be used to split the clustering tests into two executions of similar duration. The first node can run the first half of the tests in sub-packages, e.g.:
./integration-tests.sh clean install -Dts.noSmoke -Dts.clustering -P="-ts.clustering.cluster.fullha.profile,-ts.clustering.cluster.ha-infinispan-server.profile,-ts.clustering.byteman.profile,-ts.clustering.single.profile" -Dts.surefire.clustering.ha.additionalExcludes=affinity\|cdi\|dispatcher\|ejb\|ejb2\|group\|jms\|jpa
while another node can concurrently run all the other profiles and the other half of sub-packages:
./integration-tests.sh clean install -Dts.noSmoke -Dts.clustering -Dts.surefire.clustering.ha.additionalExcludes=jsf\|provider\|registry\|singleton\|sso\|web\|xsite
If the test packages get out of sync with the excludes, this will result in a test running multiple times rather than tests being omitted.
Running a single test
The standard way to run a single test is to specify -Dtest=foo. However, this overrides the includes/excludes
section of the Surefire Maven plugin execution, so in the case of the clustering testsuite, the profile which the
test belongs to needs to be specified as well. For instance, to run a single test from the 'single' test execution,
exclude the other test profiles:
./integration-tests.sh clean install -Dts.noSmoke -Dts.clustering -P="-ts.clustering.cluster.ha.profile,-ts.clustering.cluster.fullha.profile,-ts.clustering.cluster.ha-infinispan-server.profile,-ts.clustering.byteman.profile,-ts.clustering.single.profile" -Dtest=org.jboss.as.test.clustering.single.dispatcher.CommandDispatcherTestCase
1.3.4. Running Infinispan Server tests against a custom distribution
To run the Infinispan Server-based tests against a custom distribution, a custom location can be specified with -Dinfinispan.server.home.override=/foo/bar
and -Dinfinispan.server.profile.override=infinispan-13.0.xml to use a corresponding server profile.
The distribution is then copied over to the build directories and patched with user credentials.
./integration-tests.sh clean install -Dts.noSmoke -Dts.clustering -P="-ts.clustering.cluster.ha.profile,-ts.clustering.cluster.fullha.profile,-ts.clustering.single.profile,-ts.clustering.byteman.profile,-ts.clustering.single.profile" -Dinfinispan.server.home.override=/Users/rhusar/Downloads/redhat-datagrid-8.3.0-server
Should it be required, the Infinispan Server driver version can also be overridden with -Dversion.org.infinispan.server.driver=13.0.0.Dev03.
2. WildFly Integration Testsuite User Guide
See also: WildFly Testsuite Test Developer Guide
Target Audience: Those interested in running the testsuite or a subset thereof, with various configuration options.
2.1. Running the testsuite
The tests can be run using:
- build.sh or build.bat, as part of the WildFly build.
  - By default, only smoke tests are run. To run all tests, run build.sh install -DallTests.
- integration-tests.sh or integration-tests.bat, a convenience script which uses the bundled Maven (currently 3.0.3) and runs all parent testsuite modules (which configure the AS server).
- a pure Maven run, using mvn install.
The scripts are wrappers around the Maven-based build. Their arguments are passed to Maven (with a few exceptions described below). This means you can use:
- build.sh (defaults to install)
- build.sh install
- build.sh clean install
- integration-tests.sh install
- …etc.
2.1.1. Supported Maven phases
Testsuite actions are bound to various Maven phases up to verify.
Running the build with earlier phases may fail in the submodules due to
missing configuration steps. Therefore, the only Maven phases you may
safely run are:
- clean
- install
- site
2.1.2. Testsuite structure
testsuite
    integration
        smoke
        basic
        clust
        iiop
        multinode
        xts
    compat
    domain
    mixed-domain
    stress
    benchmark
2.1.3. Test groups
To define groups of tests to be run, these properties are available:
- -DallTests - Runs all subgroups.
- -DallInteg - Runs all integration tests. Same as cd testsuite/integration; mvn clean install -DallTests
- -Dts.integ - Basic integration + clustering tests.
- -Dts.clustering - Clustering tests.
- -Dts.iiop - IIOP tests.
- -Dts.multinode - Tests with many nodes.
- -Dts.manualmode - Tests with manual mode Arquillian containers.
- -Dts.bench - Benchmark tests.
- -Dts.stress - Stress tests.
- -Dts.domain - Domain mode tests.
- -Dts.compat - Compatibility tests.
2.2. Examples
- integration-tests.sh [install] -- Runs smoke tests.
- integration-tests.sh clean install -- Cleans the target directory, then runs smoke tests.
- integration-tests.sh install -Dts.smoke -- Same as above.
- integration-tests.sh install -DallTests -- Runs all testsuite tests.
- integration-tests.sh install -Dts.stress -- Runs smoke tests and stress tests.
- integration-tests.sh install -Dts.stress -Dts.noSmoke -- Runs stress tests only.
Pure maven - if you prefer not to use scripts, you may achieve the same result with:
- mvn … -rf testsuite
The -rf … parameter stands for "resume from" and causes Maven to run
the specified module and all subsequent modules.
It’s possible to run only a single module (provided the ancestor modules were already run to create the AS copies):
- mvn … -pl testsuite/integration/cluster
The -pl … parameter stands for "project list" and causes Maven to
run the specified module only.
2.2.2. Other options
-DnoWebProfile - Run all tests with the full profile (standalone-full.xml). By default, most tests are run under the web profile (standalone.xml).
-Dts.skipTests - Skip the testsuite's tests. Defaults to the value of -DskipTests, which defaults to false. To build the AS, skip unit tests, and run the testsuite, use -DskipTests -Dts.skipTests=false.
2.2.3. Timeouts
Surefire execution timeout
Unfortunately, no math can be done in Maven, so instead of applying a timeout ratio, you need to specify the Surefire timeout manually:
-Dsurefire.forked.process.timeout=900
In-test timeout ratios
Ratio in percent: 100 = default, 200 = twice as long timeouts for the given category.
Currently we have five different ratios. Later, these could be replaced with just one generic ratio, one for database and one for deployment operations.
-Dtimeout.ratio.fsio=100
-Dtimeout.ratio.netio=100
-Dtimeout.ratio.memio=100
-Dtimeout.ratio.proc=100
-Dtimeout.ratio.db=100
2.2.4. Running a single test (or specified tests)
A single test is run using -Dtest=…. Examples:
- ./integration-tests.sh install -Dtest='*Clustered*' -Dintegration.module -Dts.clustering
- ./integration-tests.sh clean install -Dtest=org/jboss/as/test/integration/ejb/async/*TestCase.java -Dintegration.module -Dts.basic
- cd testsuite; mvn install -Dtest='*Clustered*' -Dts.basic # No need for -Dintegration.module - the integration module is active by default.
The same shortcuts listed in "Test groups" may be used to activate the module and group profile.
Note that -Dtest= overrides the <includes> and <excludes> defined in
pom.xml, so do not rely on them when using wildcards - all compiled test
classes matching the wildcard will be run.
Which Surefire execution is used?
Due to a Surefire design flaw, tests run multiple times if there are
multiple Surefire executions.
To prevent this, if -Dtest=… is specified, non-default executions
are disabled and standalone-full is used for all tests.
If you need it another way, you can work around this:
- basic-integration-web.surefire with standalone.xml - Configure standalone.xml to be used as the server config.
- basic-integration-non-web.surefire - For tests included here, technically nothing changes.
- basic-integration-2nd.surefire - Simply run the second test in another invocation of Maven.
2.2.5. Running against existing AS copy (not the one from
build/target/jboss-as-*)
-Djboss.dist=<path/to/jboss-as> will tell the testsuite to copy that AS into submodules to run the tests against.
For example, you might want to run the testsuite against an AS located in
/opt/wildfly-8:
./integration-tests.sh -DallTests -Djboss.dist=/opt/wildfly-8
The difference between jboss.dist and jboss.home:
jboss.dist is the location of the tested binaries. It gets copied to the testsuite submodules.
jboss.home is used internally and points to those copied AS instances (for multinode tests, it may even differ for each AS started by Arquillian).
Running against a running JBoss AS instance
Arquillian's WildFly container adapter allows specifying
allowConnectingToRunningServer in arquillian.xml, which makes it
check whether an AS is listening at managementAddress:managementPort;
if so, it uses that server instead of launching a new one, and doesn't
shut it down at the end.
All arquillian.xml’s in the testsuite specify this parameter. Thus, if you have a server already running, it will be re-used.
Running against JBoss Enterprise Application Platform (EAP) 6.0
To run the testsuite against the AS included in JBoss Enterprise Application Platform 6.x (EAP), special steps are needed.
Assuming you already have the sources available and the distributed EAP
Maven repository unzipped in e.g. /opt/jboss/eap6-maven-repo/:
1) Configure Maven in settings.xml to use only the EAP repository. This
repo contains all artifacts necessary for building EAP, including Maven
plugins.
The build (unlike running the testsuite) may be done offline.
The recommended way of configuring this is to use a special settings.xml, not
your local one (typically in .m2/settings.xml).
<mirrors>
<mirror>
<id>eap6-mirror-setting</id>
<mirrorOf>*,!central-eap6,!central-eap6-plugins,!jboss-public-eap6,!jboss-public-eap6-plugins</mirrorOf>
<name>Mirror Settings for EAP 6 build</name>
<url>file:///opt/jboss/eap6-maven-repo</url>
</mirror>
</mirrors>
2) Build EAP. You won’t use the resulting EAP build, though. The purpose is to get the artifacts which the testsuite depends on.
mvn clean install -s settings.xml -Dmaven.repo.local=local-repo-eap
3) Run the testsuite. Assuming that EAP is located in /opt/eap6, you
would run:
./integration-tests.sh -DallTests -Djboss.dist=/opt/eap6
2.2.6. Running with a debugger
| Argument | What will start with debugger | Default port | Port change arg. |
|---|---|---|---|
| -Ddebug | AS instances run by Arquillian | 8787 | -Das.debug.port=… |
| -Djpda | alias for -Ddebug | | |
| -DdebugClient | Test JVMs (currently Surefire) | 5050 | -Ddebug.port.surefire=… |
| -DdebugCLI | AS CLI | 5051 | -Ddebug.port.cli=… |
Examples
./integration-tests.sh install -DdebugClient -Ddebug.port.surefire=4040
...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Listening for transport dt_socket at address: 4040
./integration-tests.sh install -DdebugClient -Ddebug.port.surefire
...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Listening for transport dt_socket at address: 5050
./integration-tests.sh install -Ddebug
./integration-tests.sh install -Ddebug -Das.debug.port=5005
| JBoss AS is started by Arquillian when the first test which requires the given instance is run. Unless you pass -DtestLogToFile=false, there’s (currently) no challenge text in the console; it will look like the first test is stuck. This is being solved in http://jira.codehaus.org/browse/SUREFIRE-781. |
| Depending on which test group(s) you run, multiple AS instances may be started. In that case, you need to attach the debugger multiple times. |
Debugging clustering tests
With the implementation of WFLY-276 for the clustering test suite module, these tests can be run with the debugger as well. Here are some tips on how to effectively debug:
- Prepare your IDE with 4 remote debugger session templates: node1 with port 8787, node2 with port 8788, node3 with port 8789, and node4 with port 8790.
- Using mvn, run a single test (-Dtest=..) in debug mode (-Ddebug / -Djpda).
- Connect the debugger to each node as they are starting, since the JVM is configured to wait for the debugger connection to proceed.
- Keep connecting to starting nodes, as some tests change the topology during the test.
| The container start timeout in Arquillian is currently set to 60 seconds; you may need to increase the timeout depending on how quickly you connect the remote debugger. |
2.2.7. Running tests with custom database
To run with a different database, specify -Dds and use these
properties (with the following defaults):
-Dds.jdbc.driver=
-Dds.jdbc.driver.version=
-Dds.jdbc.url=
-Dds.jdbc.user=test
-Dds.jdbc.pass=test
-Dds.jdbc.driver.jar=${ds.db}-jdbc-driver.jar
driver is the JDBC driver class. The JDBC url, user and pass are as
expected.
driver.version is used for automated JDBC driver downloading. Users
can set up an internal Maven repository hosting JDBC drivers, with
artifacts using
GAV = jdbcdrivers:${ds.db}:${ds.jdbc.driver.version}
Internally, JBoss has such repo at http://nexus.qa.jboss.com:8081/nexus/content/repositories/thirdparty/jdbcdrivers/ .
The ds.db value is set depending on ds. E.g. -Dds=mssql2005 sets
ds.db=mssql (since they have the same driver). -Dds.db may be
overridden to use a different driver.
If you don't want to use such a driver, just set
-Dds.db= (empty) and provide the driver to the AS manually.
(This is not yet supported; work is in progress on a parameter to provide the JDBC driver jar.)
2.2.8. Running tests with IPv6
-Dipv6 - Runs AS with
-Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true
and the following defaults, overridable by the respective parameter:
| Parameter | IPv4 default | IPv6 default | Notes |
|---|---|---|---|
| -Dnode0 | 127.0.0.1 | ::1 | Single-node tests. |
| -Dnode1 | 127.0.0.1 | ::1 | Two-node tests (e.g. cluster) use this for the 2nd node. |
| -Dmcast | 230.0.0.4 | ff01::1 | ff01::1 is the IPv6 node-local scope mcast addr. |
| -Dmcast.jgroupsDiag | 224.0.75.75 | ff01::2 | JGroups diagnostics multicast address. |
| -Dmcast.modcluster | 224.0.1.105 | ff01::3 | mod_cluster multicast address. |
Values are set in AS configuration XML, replaced in resources (like ejb-jar.xml) and used in tests.
2.2.9. Running tests with security manager / custom security policy
-Dsecurity.manager - Run with default policy.
-Dsecurity.policy=<path> - Run with the given policy.
-Dsecurity.manager.other=<set of Java properties> - Run with the given
properties. The whole set is included in all server startup parameters.
Example:
./integration-tests.sh clean install -Dintegration.module -DallTests \
\"-Dsecurity.manager.other=-Djava.security.manager \
-Djava.security.policy==$(pwd)/testsuite/shared/src/main/resources/secman/permitt_all.policy \
-Djava.security.debug=access:failure \"
Notice the \" quotes delimiting the whole -Dsecurity.manager.other
property.
2.2.10. Creating test reports
Test reports are created in the form known from EAP 5. To create them, simply run the testsuite, which will create Surefire XML files.
Creation of the reports is bound to the site Maven phase, so it must
be run separately afterward. Use one of these:
./integration-tests.sh site
cd testsuite; mvn site
mvn -pl testsuite site
Note that it will take all test results under testsuite/integration/ (the pattern is **/*TestCase.xml), without the need to specify -DallTests.
2.2.11. Creating coverage reports
Coverage reports are created by JaCoCo.
During the integration tests, Arquillian is passed a JVM argument which
makes it run with the JaCoCo agent, which records execution data into
${basedir}/target/jacoco.
In the site phase, HTML, XML and CSV reports are generated. That is
done using the jacoco:report Ant task in maven-ant-plugin, since JaCoCo's
Maven report goal doesn't support getting classes outside
target/classes.
Usage
./build.sh clean install -DskipTests
./integration-tests.sh clean install -DallTests -Dcoverage
./integration-tests.sh site -DallTests -Dcoverage ## Must be run separately.
Alternative:
mvn clean install -DskipTests
mvn -rf testsuite clean install -DallTests -Dcoverage
mvn -rf testsuite site -DallTests -Dcoverage
2.2.12. Cleaning the project
For the most stable build process, it should start with:
- clean target directories
- only the central Maven repo configured
- a clean local repository, or at least one:
  - free of artifacts to be built
  - free of dependencies to be used (especially snapshots)

To achieve this, you may use these commands:
mvn clean install -DskipTests -DallTests ## ...to clean all testsuite modules.
mvn dependency:purge-local-repository build-helper:remove-project-artifact -Dbuildhelper.removeAll
In case the build happens in a shared environment (e.g. a network disk), it’s recommended to use a local repository:
cp /home/jenkins/.m2/settings.xml .
sed "s|<settings>|<settings><localRepository>/home/jenkins/jenkins-repos/$JOBNAME</localRepository>|" -i settings.xml
Or:
mvn clean install ... -Dmaven.repo.local=localrepo
3. WildFly Testsuite Harness Developer Guide
The testsuite implementation was tracked by WFLY-576, which still contains a lot of useful information.
3.1. Adding a new maven plugin
The plugin version needs to be specified in the <properties> section of the jboss-parent pom.xml file.
3.3. Properties and their propagation
Propagated to tests through arquillian.xml:
<property name="javaVmArguments">${server.jvm.args}</property>
TBD: https://issues.redhat.com/browse/ARQ-647
3.3.1. JBoss AS instance dir
integration/pom.xml
(currently nothing)
*-arquillian.xml
<container qualifier="jboss" default="true">
<configuration>
<property name="jbossHome">${basedir}/target/jbossas</property>
3.3.2. Server JVM arguments
<surefire.memory.args>-Xmx512m -XX:MaxPermSize=256m</surefire.memory.args>
<surefire.jpda.args></surefire.jpda.args>
<surefire.system.args>${surefire.memory.args} ${surefire.jpda.args}</surefire.system.args>
3.4. Debug parameters propagation
<surefire.jpda.args></surefire.jpda.args> - default
<surefire.jpda.args>-Xrunjdwp:transport=dt_socket,address=${as.debug.port},server=y,suspend=y</surefire.jpda.args> - activated by -Ddebug or -Djpda
testsuite/pom.xml: <surefire.system.args>... ${surefire.jpda.args} ...</surefire.system.args>
testsuite/pom.xml: <jboss.options>${surefire.system.args}</jboss.options>
testsuite/integration/pom.xml:
<server.jvm.args>${surefire.system.args} ${jvm.args.ip.server} ${jvm.args.security} ${jvm.args.timeouts} -Dnode0=${node0} -Dnode1=${node1} -DudpGroup=${udpGroup} ${jvm.args.dirs}</server.jvm.args>
arquillian.xml:
<property name="javaVmArguments">${server.jvm.args} -Djboss.inst=${basedir}/target/jbossas</property>
3.5. How WildFly is built and configured for testsuite modules
The WildFly instance is copied from ${jboss.dist} to testsuite/target/jbossas.
This defaults to the WildFly built by the project (build/target/wildfly-*).
testsuite/pom.xml
Copies from ${jboss.home} to ${basedir}/target/jbossas in the generate-test-resources phase, using the resource-plugin:copy-resources goal.
testsuite/integration/pom.xml
phase process-test-resources: antrun-plugin:
<ant antfile="${basedir}/src/test/scripts/basic-integration-build.xml">
<target name="build-basic-integration"/>
<target name="build-basic-integration-jts"/>
</ant>
Which invokes:
<target name="build-basic-integration" description="Builds server configuration for basic-integration tests">
<build-server-config name="jbossas"/>
<!-- .. -->
</target>
Which invokes:
<!-- Copy the base distribution. -->
<!-- We exclude modules and bundles as they are read-only and we locate them via sys props. -->
<copy todir="@{output.dir}/@{name}">
<fileset dir="@{jboss.dist}">
<exclude name="**/modules/**"/>
<exclude name="**/bundles/**"/>
</fileset>
</copy>
<!-- overwrite with configs from test-configs and apply property filtering -->
<copy todir="@{output.dir}/@{name}" overwrite="true" failonerror="false">
<fileset dir="@{test.configs.dir}/@{name}"/>
<filterset begintoken="${" endtoken="}">
<filter token="node0" value="${node0}"/>
<filter token="node1" value="${node1}"/>
<filter token="udpGroup" value="${udpGroup}"/>
<filter-elements/>
</filterset>
</copy>
4. WildFly Testsuite Test Developer Guide
See also: WildFly Integration Testsuite User Guide
4.1. Pre-requisites
Please be sure to read Pre-requisites - test quality standards and follow those guidelines.
4.2. ManagementClient and ModelNode usage example
import org.jboss.as.controller.descriptions.ModelDescriptionConstants;
import org.jboss.dmr.ModelNode;
import org.junit.Assert;

// Build and execute a :read-resource(recursive=true) operation, then check the outcome.
final ModelNode operation = new ModelNode();
operation.get(ModelDescriptionConstants.OP).set(ModelDescriptionConstants.READ_RESOURCE_OPERATION);
operation.get(ModelDescriptionConstants.OP_ADDR).set(address);
operation.get(ModelDescriptionConstants.RECURSIVE).set(true);
final ModelNode result = managementClient.getControllerClient().execute(operation);
Assert.assertEquals(ModelDescriptionConstants.SUCCESS, result.get(ModelDescriptionConstants.OUTCOME).asString());
ManagementClient can be obtained as described below.
4.3. Arquillian features available in tests
@ServerSetup
TBD
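As an illustration of what a typical @ServerSetup usage looks like (the test class and setup task names are hypothetical, not part of the testsuite):
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.as.arquillian.api.ServerSetup;
import org.jboss.as.arquillian.api.ServerSetupTask;
import org.jboss.as.arquillian.container.ManagementClient;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
@ServerSetup(ExampleTestCase.ExampleSetupTask.class)
public class ExampleTestCase {

    static class ExampleSetupTask implements ServerSetupTask {
        @Override
        public void setup(ManagementClient managementClient, String containerId) throws Exception {
            // Apply the management operations the tests need, e.g. via
            // managementClient.getControllerClient().execute(...)
        }

        @Override
        public void tearDown(ManagementClient managementClient, String containerId) throws Exception {
            // Revert the configuration changes made in setup().
        }
    }

    // ... deployments and test methods ...
}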
@ContainerResource private ManagementClient managementClient;
final ModelNode result = managementClient.getControllerClient().execute(operation);
TBD
@ArquillianResource private ManagementClient managementClient;
ModelControllerClient client = managementClient.getControllerClient();
@ArquillianResource ContainerController cc;
@Test
public void test() {
cc.setup("test", ...properties..)
cc.start("test")
}
<arquillian>
<container qualifier="test" mode="manual" />
</arquillian>
// Targeted containers HTTP context.
@ArquillianResource URL url;
// Targeted containers HTTP context where servlet is located.
@ArquillianResource(SomeServlet.class) URL url;
// Targeted containers initial context.
@ArquillianResource InitialContext|Context context;
// The manual deployer.
@ArquillianResource Deployer deployer;
See Arquillian’s Resource Injection docs for more info, https://github.com/arquillian/arquillian-examples for examples.
See also Arquillian Reference.
Note on the @ServerSetup annotation: it works as expected only on non-manual containers. In the case of manual mode containers, it calls the setup() method after each server start-up (actually before deployment), which is right, but the tearDown() method is called only at the AfterClass event, i.e. usually after your manual shutdown of the server. This limits your ability to revert configuration changes on the server, and so on. I cloned the annotation and changed it to fit the manual mode, but it is still in my github branch :)
4.4. Properties available in tests
4.4.1. Directories
- jbossas.project.dir - Project's root dir (where ./build.sh is).
- jbossas.ts.dir - Testsuite dir.
- jbossas.ts.integ.dir - Testsuite's integration module dir.
- jboss.dist - Path to the AS distribution, either built (build/target/jboss-as-…) or user-provided via -Djboss.dist.
- jboss.inst - (Arquillian in-container only) Path to the AS instance in which the test is running (until ARQ-650 is possibly done).
- jboss.home - Deprecated as its name is unclear and confusing. Use jboss.dist or jboss.inst.
4.4.3. Time-related coefficients (ratios)
In case some of the following causes timeouts, you may prolong the timeouts by setting a value >= 100:
100 = leave as is, 150 = 50% longer, etc.
- timeout.ratio.gen - General ratio - can be used to adjust all timeouts. When this and a specific ratio are defined, both apply.
- timeout.ratio.fs - Filesystem IO
- timeout.ratio.net - Network IO
- timeout.ratio.mem - Memory IO
- timeout.ratio.cpu - Processor
- timeout.ratio.db - Database
Time ratios will soon be provided by
org.jboss.as.test.shared.time.TimeRatio.for*() methods.
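Until then, a test can apply such a ratio itself; the helper below is only an illustration (the property name and scaling are an example, not a testsuite API):
// Scale a base timeout by the timeout.ratio.net system property (100 = unchanged).
private static long adjustedTimeoutMillis(long baseMillis) {
    int ratio = Integer.getInteger("timeout.ratio.net", 100);
    return baseMillis * ratio / 100L;
}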
4.5. Negative tests
Testing of invalid deployments is supposed to leverage the @ShouldThrowException annotation.
Unfortunately, this feature cannot currently be used due to an outstanding issue; see WFLY-673.
However, the exception might be obtained using the manual deployer:
@Deployment(name = "X", managed = false)
// ...
@Test
public void shouldFail(@ArquillianResource Deployer deployer) throws Exception {
try {
deployer.deploy("X");
} catch (Exception e) {
// assert on the expected deployment exception here
}
}
4.6. Clustering tests (WFLY-616)
You need to deploy the same archive twice, so there are two deployment methods that return the same thing, and then tests that run against each deployment.
@Deployment(name = "deplA", testable = false)
@TargetsContainer("serverB")
public static Archive<?> deployment() { ... }
@Deployment(name = "deplB", testable = false)
@TargetsContainer("serverA")
public static Archive<?> deployment() { ... }
@Test
@OperateOnDeployment("deplA")
public void testA() { ... }
@Test
@OperateOnDeployment("deplA")
public void testA() {...}
4.7. How to get the tests to main branch
- First of all, be sure to read the "Before you add a test" section.
- Fetch the newest main: git fetch upstream # Provided you have the wildfly/wildfly GitHub repo as a remote called 'upstream'.
- Rebase your branch: git checkout WFLY-1234-your-branch; git rebase upstream/main
- Run the whole testsuite (integration-tests -DallTests). You may use https://ci.wildfly.org/viewType.html?buildTypeId=WF_Nightly.
  - If any tests fail and they do not fail in main, fix it and go back to the "Fetch" step.
- Push to a new branch in your GitHub repo: git push origin WFLY-1234-new-XY-tests
- Create a pull request on GitHub. Go to your branch and click on "Pull Request".
  - If you have a Jira, start the title with it, like "WFLY-1234 New tests for XYZ".
  - If you don't, write some apposite title. In the description, describe in detail what was done and why it should be merged. Keep in mind that the diff will be visible under your description.
- Keep the branch rebased daily until it's merged (see the Fetch step). If you don't, you're dramatically decreasing the chance of getting it merged.
- You might have someone with merge privileges cooperate with you, so they know what you're doing and expect your pull request.
- When your pull request is reviewed and merged, you'll be notified by mail from GitHub.
- You may also check whether it was merged with: git fetch upstream; git cherry <branch> ## Or: git branch --contains <branch>
- Your commits will appear in main. They will have the same hash as in your branch.
- You are now safe to delete both your local and remote branches: git branch -D WFLY-1234-your-branch; git push origin :WFLY-1234-your-branch
5. How to Add a Test Case
Thank you for finding time to contribute to WildFly quality. Covering corner cases found by community users with tests is very important to increase stability. If you’re providing a test case to support your bug report, it’s very likely that your bug will be fixed much sooner.
5.1. Create a test case
Don’t be discouraged, it’s quite easy - a simple use case may even consist of a single short .java file.
Check WildFly test suite test cases for examples.
For more information, see WildFly Testsuite Test Developer Guide. Check the requirements for a test to be included in the testsuite.
Ask for help at WildFly Google user group or Zulip chat.
5.2. Push your test case to GitHub and create a pull request.
For information on how to create a GitHub account and push your code therein, see Hacking on WildFly.
6. Test Quality Standards
All tests must follow these quality standards.
6.1. Verify the Test Belongs in WildFly
Ensure the test validates current WildFly functionality. Do not test the internal functionality of included libraries, as those should be tested in the library’s upstream test suite. However, do test the integration of those libraries with WildFly.
6.2. Add Only Correct and Understandable Tests
Tests must be clear and verifiably correct. If a test is overly complex or uses questionable logic, refactor it to use clear logic or consult with the original author to clarify intent.
6.3. Avoid Duplicate Tests
Before adding a test, verify that the functionality is not already covered elsewhere.
Use tools like git grep to search for existing test coverage.
6.4. Use Descriptive Test Names
Test class and method names should clearly describe what is being tested.
Jira IDs alone are not descriptive and require external lookup to understand the test purpose.
Jira references may be included in test comments or commit messages, e.g. in Javadoc @see.
6.5. Document Tests with Javadoc
All test classes and non-trivial test methods must include Javadoc explaining what functionality is being tested and why.
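For example (the class name and Jira reference are illustrative):
/**
 * Tests that asynchronous Jakarta Enterprise Beans invocations complete
 * even when the calling thread returns immediately.
 *
 * @see <a href="https://issues.redhat.com/browse/WFLY-1234">WFLY-1234</a>
 */
public class AsyncInvocationTestCase {
    // ...
}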
6.6. Expand Existing Tests When Appropriate
When adding test coverage for functionality similar to an existing test, consider adding test methods to the existing test class rather than creating a new class. Each new test class adds overhead to execution time. Only create a new class when the functionality is sufficiently different to warrant separation.
6.7. Organize Tests by Subsystem
Place integration tests in subpackages under the relevant subsystem (e.g., org.jboss.as.test.integration.ejb.async).
When a test involves multiple subsystems, place it under the package for the specification that defines the primary behavior being tested.
6.8. Document Non-Obvious Specification Requirements
When testing behavior mandated by specifications but not immediately obvious, include comments explaining the requirement (e.g., "Verifies Jakarta EE X.Y.Z - Description of requirement").
6.9. Collocate Test Resources with Test Code
Place integration test resources (deployment descriptors, configuration files, etc.) in the same source directory as the test class. This keeps all test artifacts together and makes tests easier to understand.
6.10. Use Configurable Values for URLs and Ports
Do not hard-code URLs, hostnames, ports, or IP addresses. Use configurable values provided by Arquillian or system properties to ensure tests work with different configurations and IPv6 addresses.
If a necessary configuration property is missing, file a Jira issue with component "Testsuite".
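For instance, instead of hard-coding http://localhost:8080/…, a test can let Arquillian inject the deployment's base URL; the sketch below assumes an illustrative class name and servlet mapping:
import java.net.URL;

import org.jboss.arquillian.test.api.ArquillianResource;
import org.junit.Assert;
import org.junit.Test;

public class ConfigurableUrlExampleTestCase {

    // The injected URL already reflects the configured host, port and context root.
    @ArquillianResource
    private URL baseUrl;

    @Test
    public void callServletViaInjectedUrl() throws Exception {
        Assert.assertNotNull("Base URL should have been injected by Arquillian", baseUrl);
        URL target = new URL(baseUrl, "hello"); // "hello" is an illustrative servlet mapping
        // ... open a connection to 'target' and assert on the response ...
    }
}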
6.11. Follow Best Practices for Commits
- Keep changes focused on the specific issue or feature being addressed.
- Do not include unrelated changes such as reformatting or typo fixes in feature commits. Submit these separately.
- Prefer smaller, focused pull requests over large, difficult-to-review changes.
- Maintain consistency across commits (e.g., when renaming, update all references).
- Write clear commit messages that will be meaningful in the project history.
- Include the Jira ID in commit messages when working on a tracked issue.
6.12. Avoid Blind Timeouts
Do not use Thread.sleep() without checking for the actual condition being awaited.
Use active waiting with timeouts or timeout mechanisms provided by the API being tested.
Make timeouts configurable with reasonable defaults. For groups of similar tests, use a shared configurable timeout value.
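A minimal sketch of such active waiting (the property name, default and poll interval are illustrative):
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

// Poll the awaited condition instead of sleeping blindly for a fixed time.
static void waitFor(BooleanSupplier condition) throws InterruptedException, TimeoutException {
    long timeoutMillis = Long.getLong("ts.example.wait.timeout", 10000L); // configurable, with a default
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (!condition.getAsBoolean()) {
        if (System.currentTimeMillis() > deadline) {
            throw new TimeoutException("Condition not met within " + timeoutMillis + " ms");
        }
        Thread.sleep(100); // short poll interval, not a blind wait for the whole duration
    }
}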
6.13. Provide Descriptive Assertion Messages
Always include descriptive messages in assert*() and fail() calls.
Clear messages make test failures easier to diagnose.
// Good
assertTrue("File config.xml should exist", configFile.exists());
// Bad - provides no context on failure
assertTrue(configFile.exists());
6.14. Include Configuration Context in Exceptions
When tests fail due to possible misconfiguration, include the relevant configuration property names and values in exception messages.
File jdbcJar = new File(System.getProperty("jbossas.ts.dir", "."), "integration/src/test/resources/mysql-connector-java-5.1.15.jar");
if (!jdbcJar.exists()) {
throw new IllegalStateException("Cannot find " + jdbcJar + " using ${jbossas.ts.dir} == " + System.getProperty("jbossas.ts.dir"));
}
6.15. Clean Up Resources
- Close all sockets, connections, streams, and file descriptors in finally blocks or use try-with-resources.
- Avoid storing large objects in static fields. If necessary, clear them in finally blocks.
- Do not modify server configuration unless you restore it in a finally block or @After* method.
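For example, a try-with-resources sketch (the host and port would come from configurable properties, as discussed above):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Both the socket and the reader are closed automatically, even if an assertion fails.
try (Socket socket = new Socket(host, port);
     BufferedReader reader = new BufferedReader(
             new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
    String firstLine = reader.readLine();
    // ... assert on 'firstLine' ...
}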
6.16. Keep Tests Configurable
Extract configuration values such as timeouts, paths, URLs, and counts into configurable properties defined at the beginning of the test class. This improves test maintainability and flexibility.
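A sketch of what that can look like at the top of a test class (property names and defaults are illustrative):
// Defaults apply unless overridden with -D on the command line.
private static final int REQUEST_COUNT = Integer.getInteger("ts.example.request.count", 10);
private static final long TIMEOUT_MILLIS = Long.getLong("ts.example.timeout.millis", 30000L);
private static final String TARGET_PATH = System.getProperty("ts.example.target.path", "/example");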
6.17. Follow Coding Standards
Coding standards for production code also apply to test code. Please refer to Hacking on WildFly for the coding standards. Just to mention the most frequent violations, always include the copyright header:
/*
 * Copyright The WildFly Authors
 * SPDX-License-Identifier: Apache-2.0
 */
and follow our import order for the project:
import static *
import java.*
import javax.*
import others.*
7. Shared Test Classes and Resources
7.1. Among Testsuite Modules
Use the testsuite/shared module.
Classes and resources in this module are available in all testsuite modules, i.e. in testsuite/*. Only use it if necessary - don't put things in there "for future use". Don't split packages across modules. Make sure the Java package is unique in the WildFly project.
Document your util classes (Javadoc) so they can be easily found and reused! A generated list will be put here.
7.2. Between Components and Testsuite Modules
To share a component's test classes with some module in the testsuite, you don't need to split the component into submodules. You can create a jar with a classifier using this:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>test-jar</goal>
</goals>
</execution>
</executions>
</plugin>
This creates a jar with the classifier "tests", so you can add it as a dependency to a testsuite module:
<dependency>
<groupId>org.wildfly</groupId>
<artifactId>wildfly-clustering-common</artifactId>
<classifier>tests</classifier>
<version>${project.version}</version>
<scope>test</scope>
</dependency>