Common service concepts
The available test service types (validation, messaging and processing) differ significantly in their purpose and use. Nonetheless, they share certain common concepts that make their use by the test bed consistent. The following points summarise the high-level concepts that are common across service types:
All services are triggered by test bed calls that are captured with appropriate GITB TDL steps.
A service is identified in a test case by setting its WSDL URL in the relevant step's handler attribute.
All services are capable of receiving input in the form of parameters and configuration and returning arbitrary output.
All service APIs foresee a getModuleDefinition operation that is used to document the service's use, notably how to call it and what it returns.
All services can be called through the test bed or directly via a SOAP web service client.
Services are web applications that can be as simple or as complicated as needed.
Template services exist per case to facilitate service development (see Service template).
The sub-sections that follow address additional common concerns of a more detailed nature.
Documenting input and output parameters
The getModuleDefinition operation of each service is used primarily to define the inputs the service expects as well as its outputs. Of these, defining the input parameters is most important as this determines how the service should be called and, when it is called by the test bed, serves to proactively check for missing required input. The following example illustrates a service where two inputs are defined:
public GetModuleDefinitionResponse getModuleDefinition(Void parameters) {
GetModuleDefinitionResponse response = new GetModuleDefinitionResponse();
response.setModule(new MessagingModule());
...
response.getModule().setInputs(new TypedParameters());
response.getModule().getInputs().getParam().add(createParameter("messageToSend", "string", UsageEnumeration.O, ConfigurationType.SIMPLE, "The message to send."));
response.getModule().getInputs().getParam().add(createParameter("confirmationCode", "string", UsageEnumeration.O, ConfigurationType.SIMPLE, "The received confirmation code."));
return response;
}
private TypedParameter createParameter(String name, String type, UsageEnumeration use, ConfigurationType kind, String description) {
TypedParameter parameter = new TypedParameter();
parameter.setName(name);
parameter.setType(type);
parameter.setUse(use);
parameter.setKind(kind);
parameter.setDesc(description);
return parameter;
}
Parameters are defined using the TypedParameter class, which in the example is created using a helper method (createParameter()). The information needed to define a parameter is summarised in the following table.
| Property | Description |
|---|---|
| name | The name of the parameter. This will be used to identify it both when calling via the test bed as well as in standalone calls. |
| type | The type of the parameter corresponding to one of the GITB types that can be used in test cases. |
| use | Whether the parameter is required (UsageEnumeration.R) or optional (UsageEnumeration.O). |
| kind | The way in which the input parameter is configured. This can always be set to ConfigurationType.SIMPLE. |
| desc | The description of the parameter to be displayed in the result of a getModuleDefinition call. |
Finally, note that output parameters may also be defined in getModuleDefinition using the same construct. This however is done purely for documentation purposes as there is no automatic type checking or verification. Unless you want to fully document a service's outputs you can skip their definition.
Note
Defining list inputs: When defining an input of type list, a good practice is to also specify the expected contained type (i.e. the type of its elements). Do this by setting the type of the input in the getModuleDefinition response using the form list[string] rather than plain list (which however also works).
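The list[...] convention can be sketched with a tiny helper that builds such type strings. Note that this helper is purely hypothetical (it is not part of the GITB API) and only illustrates the expected string format:

```java
public class ListTypeSketch {

    // Hypothetical helper: build a GITB type string for a list input, optionally
    // specifying the contained element type (e.g. "list[string]" rather than "list").
    static String listType(String containedType) {
        return (containedType == null) ? "list" : "list[" + containedType + "]";
    }
}
```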
Using inputs
All service types expect inputs to be passed to them. Inputs are used in the following operations:
The validate operation of validation services.
The send and receive operations of messaging services.
The process operation of processing services.
In each case inputs are received as a List of AnyContent objects. The AnyContent class provides a representation of the passed input including the metadata needed to determine its value. It includes the following properties relevant to inputs:
| Property | Description |
|---|---|
| name | The name of the input, matching the documented name from the getModuleDefinition operation. |
| value | The value of the input as a String. |
| embeddingMethod | The way to process the value property (STRING, BASE64 or URI). |
| type | The GITB type that corresponds to this input value. |
| encoding | The encoding to consider in case the value is a BASE64 string representing bytes. |
| item | A nested list of AnyContent objects used for inputs of type list or map. |
Note
AnyContent for outputs: The AnyContent type is also used to construct service outputs. For more information see Returning outputs.
Regarding list and map types, note that the value property is empty; the contained values are instead provided through the item property. In the case of a map, the contained AnyContent objects' name property corresponds to their map key, whereas no name is present for the objects of a list. Note that map objects may contain further list and map objects at an arbitrary depth using the same approach. This structure is reflected when calling a service with map or list inputs in a standalone manner (i.e. using a SOAP client) as illustrated in the following example.
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://www.gitb.com/vs/v1/" xmlns:v11="http://www.gitb.com/core/v1/">
<soapenv:Header/>
<soapenv:Body>
<v1:ValidateRequest>
<sessionId>12345</sessionId>
<!-- A simple string input. -->
<input name="aSimpleInput" embeddingMethod="STRING">
<v11:value>a_value</v11:value>
</input>
<!-- A list input containing two strings. -->
<input name="aListInput">
<v11:item embeddingMethod="STRING">
<v11:value>value1</v11:value>
</v11:item>
<v11:item embeddingMethod="STRING">
<v11:value>value2</v11:value>
</v11:item>
</input>
<!--
A map input containing a string for key "key1" and a nested map for key "key2".
The nested map contains a single string entry under key "SubKey1".
-->
<input name="aMapInput">
<v11:item name="key1" embeddingMethod="STRING">
<v11:value>value1</v11:value>
</v11:item>
<v11:item name="key2">
<v11:item name="SubKey1">
<v11:value>subValue2</v11:value>
</v11:item>
</v11:item>
</input>
</v1:ValidateRequest>
</soapenv:Body>
</soapenv:Envelope>
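To make the nesting concrete, the following sketch flattens such a structure into key/value pairs. A simplified stand-in class is used here instead of the real AnyContent (which comes from the GITB types library), so the Item class and flatten method are purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NestedInputSketch {

    // Simplified stand-in for AnyContent: a name, an optional simple value,
    // and nested items for list/map types.
    static class Item {
        String name;                            // null for list elements
        String value;                           // null for list/map containers
        List<Item> items = new ArrayList<>();
    }

    // Recursively flatten a nested structure into "path -> value" entries.
    // Unnamed list elements are addressed by their position.
    static void flatten(Item item, String path, Map<String, String> result) {
        String key = (item.name != null)
                ? (path.isEmpty() ? item.name : path + "." + item.name)
                : path;
        if (item.items.isEmpty()) {
            result.put(key, item.value);
        } else {
            int index = 0;
            for (Item child : item.items) {
                String childPath = (child.name != null) ? key : key + "[" + index + "]";
                flatten(child, childPath, result);
                index++;
            }
        }
    }
}
```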
Interpreting an input value
The value of a simple input (i.e. not a list or map) is provided through the value property of its AnyContent object. To determine how this value should be considered you need to make use of the embeddingMethod (enumeration ValueEmbeddingEnumeration) and encoding properties as follows:
STRING: The input to consider is the value property as-is. It is already directly provided.
BASE64: The value property is a sequence of bytes provided as an escaped BASE64 string. This needs to be decoded to retrieve the actual byte content.
URI: The value property is a reference to a remote resource that is provided as a URI. This needs to be looked up to obtain the actual input to process.
In both the BASE64 and URI cases the encoding property should be used to determine how to consider the input's bytes when converting to a character stream. If no encoding is provided, UTF-8 is assumed as the default.
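The three cases above can be sketched as a small resolution helper. The helper below is hypothetical (a real service would receive the embedding method as a ValueEmbeddingEnumeration rather than a plain string), but the decoding logic follows the rules just described:

```java
import java.io.InputStream;
import java.net.URI;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class InputValueSketch {

    // Hypothetical helper: resolve a simple input's effective text content from its
    // value, embeddingMethod and encoding properties.
    static String resolveValue(String value, String embeddingMethod, String encoding) throws Exception {
        // Default to UTF-8 if no encoding is provided.
        Charset charset = (encoding == null) ? StandardCharsets.UTF_8 : Charset.forName(encoding);
        switch (embeddingMethod) {
            case "BASE64":
                // The value holds BASE64-encoded bytes that need to be decoded.
                return new String(Base64.getDecoder().decode(value), charset);
            case "URI":
                // The value is a reference to a remote resource to look up.
                try (InputStream in = URI.create(value).toURL().openStream()) {
                    return new String(in.readAllBytes(), charset);
                }
            default:
                // STRING: the value is already the content to use.
                return value;
        }
    }
}
```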
Note
Simplified input handling: Supporting all types of embedding methods increases the usage flexibility of your service. In cases where this is not necessary you can of course simply assume the use of a single embedding method for an input and process it accordingly. If for example you are defining a validation service with one input for the content and another for the type of validation to perform, you may always assume that the content is provided as BASE64 whereas the validation type is always a STRING. If you choose to do so, make sure you document this appropriately in the inputs' descriptions in the getModuleDefinition operation. If you don't specify this it may be assumed that the input in question can be provided using any of the supported embedding methods.
Returning outputs
All services are used to return outputs to the test session that is calling them. Specifically:
Validation services return a validation report from their validate operation that may contain an arbitrary set of outputs as its context (see Constructing a validation report (TAR)).
Processing services receive input and produce output through their process operation.
Messaging services return output in the case of the send operation for any information that is useful to report (e.g. the message sent, a synchronous response). The receive operation does not return output itself, but received content is returned to the test bed asynchronously through the notifyForMessage call-back (see Test bed call-backs).
Service outputs are provided using the AnyContent class. In the case of processing services this is a List of AnyContent objects that is provided directly on the ProcessResponse class, whereas for messaging and validation services AnyContent objects are passed through the TAR report's context property (see Constructing a validation report (TAR)).
The AnyContent class includes the following properties relevant to outputs:

| Property | Description |
|---|---|
| name | The name of the output, matching the documented name from the getModuleDefinition operation. |
| value | The value of the output as a String. |
| embeddingMethod | The way to process the value property (STRING, BASE64 or URI). |
| type | The GITB type that corresponds to this output value. |
| encoding | The encoding to consider in case the value is a BASE64 string representing bytes. |
| item | A nested list of AnyContent objects used for outputs of type list or map. |
| mimeType | The mime type relevant to this output, used to improve its presentation to users. |
| forDisplay | A boolean flag defining whether this output should be displayed to users (default is true). |
| forContext | A boolean flag defining whether this output should be recorded for processing by subsequent test steps (default is true). |
Note
AnyContent for inputs: The AnyContent type is also used to read service inputs. For more information see Using inputs.
The most flexible way of returning output is to define a root AnyContent object of type map. Under this root you can add arbitrary named values for the one or more outputs you want to return. Moreover, this map can contain nested AnyContent objects of type list or map, allowing you to organise and group outputs as you wish. Constructing each AnyContent object follows the same principles (e.g. regarding values and embedding methods) as described for inputs (see Using inputs).
The values returned through AnyContent instances are recorded in the test session context and displayed as part of the relevant step's report. Binary values and long texts are not displayed inline; instead, controls are presented to either download the content as a file or view it in a code editor. To facilitate this you can set the additional mimeType property on your AnyContent outputs to the content's mime type (e.g. text/xml). The result of doing this is twofold:
When downloaded, the relevant file is set with an appropriate extension and content type.
When displayed in an editor, syntax highlighting is applied to improve readability.
The following example illustrates the construction of a complex output structure, including a simple output string and a map with three properties:
// Create an output parameter named "output1" with a simple string value.
AnyContent output1 = new AnyContent();
output1.setName("output1");
output1.setValue("A value");
output1.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Create a first string property named "property1".
AnyContent output2Property1 = new AnyContent();
output2Property1.setName("property1");
output2Property1.setValue("Value1");
output2Property1.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Create a second string property named "property2".
AnyContent output2Property2 = new AnyContent();
output2Property2.setName("property2");
output2Property2.setValue("Value2");
output2Property2.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Create a third string property named "property3" that holds XML content.
AnyContent output2Property3 = new AnyContent();
output2Property3.setName("property3");
output2Property3.setValue(xmlContent);
output2Property3.setMimeType("text/xml");
output2Property3.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Add the "property1", "property2" and "property3" values under a map named "output2".
AnyContent output2 = new AnyContent();
output2.setName("output2");
output2.getItem().add(output2Property1);
output2.getItem().add(output2Property2);
output2.getItem().add(output2Property3);
// Add both the "output1" and "output2" properties as top-level output items.
AnyContent output = new AnyContent();
output.getItem().add(output1);
output.getItem().add(output2);
// Construct the TAR report and set the outputs as the report's context.
TAR report = new TAR();
report.setContext(output);
Note that the final part of this example (setting the report's context) applies to validation and messaging services. In the case of processing services, the output object would be set directly on the ProcessResponse class.
Note
Validation service outputs: When used in GITB test cases, the output of validation services will by default not be recorded in the test session context (only a boolean flag is recorded instead). To have additional output recorded you need to set the optional output attribute (see Using validator output in test cases for details).
Defining the purpose of outputs
Outputs returned by services are used for two purposes:
To set values in the test session context for subsequent processing.
To display to users as feedback on the service’s result.
By default, output values defined as AnyContent instances serve both these purposes, meaning that they are displayed to users and also recorded in the test session context for subsequent use. When creating an AnyContent instance you can however be more specific by explicitly defining the output's purpose. This is done by means of two boolean flags:
The forDisplay flag determines whether the output is displayed to users. Setting it to false will hide the output when displaying the service's report.
The forContext flag determines whether the output is recorded in the test session context. Setting it to false means the output will not be recorded for processing in subsequent steps.
Using these flags enables interesting use cases. It could be that you want to hide certain service outputs from users because they are internal or sensitive. Setting forDisplay to false in this case allows you to hide them while still having them available for processing. On the other hand, you may want to return outputs purely for presentation purposes that you will never subsequently use. Setting forContext to false will result in these being displayed but not stored in the test session context. Obviously, setting both flags to false for a given output makes little sense as it would be neither visible nor processable.
The following example illustrates usage of these flags to fine-tune the purpose of service outputs:
// Create an output parameter not meant to be viewed in reports.
AnyContent internalStatusCode = new AnyContent();
internalStatusCode.setForDisplay(false);
internalStatusCode.setName("internalStatusCode");
internalStatusCode.setValue("CODE1");
internalStatusCode.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Create an output parameter only for display purposes.
AnyContent userMessage = new AnyContent();
userMessage.setForContext(false);
userMessage.setName("userMessage");
userMessage.setValue("Your message failed to be processed");
userMessage.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
// Add to the report's context.
TAR report = new TAR();
AnyContent context = new AnyContent();
context.getItem().add(internalStatusCode);
context.getItem().add(userMessage);
report.setContext(context);
Note
Processing services: Using the forDisplay and forContext flags is meaningful only for messaging and validation services, where outputs are returned as the TAR report context. In the case of processing services these flags are ignored, given that the produced report is distinct from the service outputs, allowing you to create each one as you want. In addition, keep in mind that processing steps are by default hidden from users unless they are configured as visible.
Using service outputs in a test session
The previous section covered the implementation needed on the service side to return one or more output values. This section covers how these output values can be used in the calling test session.
A service call's output is stored in the test session context under a key matching the corresponding test case step's id attribute.
An additional output attribute is supported for the verify and process steps to enable more control over a service's results. This is used in each case as follows:
In the verify step, this results in recording the validation report's context data in the test session context as a map named using the output attribute's value. For more details see Using validator output in test cases.
In the process step, this overrides the default use of the id attribute, storing the output in the test session context as a variable named using the output attribute's value. In addition, in case the service returns only a single output, this is stored directly in the session context rather than being placed in a map.
These different approaches to using service outputs are illustrated in the following examples. In the first example we consider a case where a file is received through a messaging service, processed through a processing service and then validated using a validation service. The process step here uses a verbose syntax and the step's id to record its output. The verify step on the other hand ignores any output values produced by the validation service:
...
<!-- Receive the file. -->
<receive id="receiveOutput" desc="Receive file" from="Sender" to="Receiver" txnId="mt1"/>
<!-- Process the file using the "convert" operation. -->
<process id="processOutput" handler="...">
<operation>convert</operation>
<input name="inputFile">$receiveOutput{data}</input>
</process>
<!-- Validate the converted file. -->
<verify handler="..." desc="Validate file">
<input name="inputFile" embeddingMethod="BASE64">$processOutput{convertedData}</input>
</verify>
...
In this example the receive step results in the test bed being notified by the relevant messaging service. This service has returned as output a map with one element named “data” that contains the file bytes. Given that the receive step has an id of “receiveOutput”, the test session context now includes a key with this value that refers to the returned output. In the subsequent process step the file content is referred to with the $receiveOutput{data} expression (see the GITB TDL expression documentation for details) when it is passed as the “inputFile” input of the “convert” operation. The result of the process step, in this case a map with a key “convertedData” pointing to the converted bytes, is stored in the test session context under key “processOutput” (the id of the process step). Finally, this converted data is used in the verify step where, using the expression $processOutput{convertedData}, it is passed as the expected “inputFile” input.
The second example that follows considers the same scenario but adapts it to make use of the process step's more succinct attribute syntax. In addition, we store the service's output directly without wrapping it in a map:
...
<!-- Receive the file. -->
<receive id="receiveOutput" desc="Receive file" from="Sender" to="Receiver" txnId="mt1"/>
<!-- Process the file using the "convert" operation. -->
<process output="dataToValidate" handler="..." operation="convert" input="$receiveOutput{data}"/>
<!-- Validate the converted file. -->
<verify handler="..." desc="Validate file">
<input name="inputFile" embeddingMethod="BASE64">$dataToValidate</input>
</verify>
...
The receive step is identical in this case. What changes is how the process step is used: attributes are used instead of elements and the resulting output variable is explicitly named. In addition, the result is no longer recorded in a map but rather set as-is. This is reflected by the update in the verify step where we refer to the processing output with $dataToValidate. Although such naming may seem trivial, it can help make test cases more straightforward. In addition, it allows you to directly use values in templates where the naming of session context variables needs to match the template's placeholders.
The third example that follows assumes that the validation service returns, alongside its validation report, a calculated digest value and size for the validated file. These can be leveraged in the test by setting the output attribute on the verify step. Doing this instructs the test bed to record the returned validation report's context in the session context in addition to just using it for display purposes:
...
<!-- Receive the file. -->
<receive id="receiveOutput" desc="Receive file" from="Sender" to="Receiver" txnId="mt1"/>
<!-- Process the file using the "convert" operation. -->
<process output="dataToValidate" handler="..." operation="convert" input="$receiveOutput{data}"/>
<!-- Validate the converted file and record the file's metadata. -->
<verify output="metadata" handler="..." desc="Validate file">
<input name="inputFile" embeddingMethod="BASE64">$dataToValidate</input>
</verify>
<!-- Use the values returned by the validation service. -->
<log>$metadata{digest}</log>
<log>$metadata{size}</log>
...
As you can see, the verify step is adapted here to record its output under a variable named “metadata”. This results in the validation report's context values (assumed to be named “digest” and “size”) being recorded in the session context for subsequent use (referred to as $metadata{digest} and $metadata{size} respectively).
Constructing a validation report (TAR)
The TAR report (short for “Test Assertion Report”) is a class used to return the result of processing along with a “success” or “failure” indication. This is used:
By validation services to return the validation result from the validate operation (the GITB TDL verify step).
By messaging services to return output from the send operation (the GITB TDL send step), as well as the asynchronously returned content relevant to the receive and listen steps, returned to the test bed through the notifyForMessage call-back operation (see Test bed call-backs).
By processing services to return a “success” or “failure” status from the process operation (the GITB TDL process step).
The information included in the TAR report can be split in three main sections:
The context, where arbitrary data can be added to be returned to the test bed.
The reports, containing individual items for errors, warnings and information messages (if applicable).
The general information on the report's date, overall result and report counters (the latter if applicable).
The following table documents each of these properties:
| Property | Description |
|---|---|
| counters.nrOfAssertions | The number of information-level findings resulting from a validation. |
| counters.nrOfWarnings | The number of warning-level findings resulting from a validation. |
| counters.nrOfErrors | The number of error-level findings resulting from a validation. If at least one such finding exists the overall report should be marked as failed. |
| reports.infoOrWarningOrError | A list of the report's detailed findings as information, warning or error items. |
| context | An AnyContent instance with arbitrary content to return to the test bed. |
| date | The timestamp of the report's creation. |
| result | The overall result of the service call (e.g. TestResultType.SUCCESS or TestResultType.FAILURE). |
Overall, when creating a TAR instance the properties you must always populate are the date and result. The result may at first seem applicable only to validation services; however, it is useful also in the case of messaging and processing services as it allows an error to be immediately signalled. An example of this could be a failure in the communication between a messaging service and a remote system, which can be caught, reported as a result of type TestResultType.FAILURE and further documented using values returned in the context property.
For validation services, the infoOrWarningOrError list is of special importance as it presents to users the detailed validation results, along with the corresponding summary counters in the nrOfAssertions, nrOfWarnings and nrOfErrors properties. Constructing each element of the infoOrWarningOrError list is achieved by:
Creating the report item's content as an instance of class BAR.
Creating a wrapper for this instance using the GITB JAXB ObjectFactory that identifies it as an info, warning or error message:
objectFactory.createTestAssertionGroupReportsTypeInfo() for information messages.
objectFactory.createTestAssertionGroupReportsTypeWarning() for warning messages.
objectFactory.createTestAssertionGroupReportsTypeError() for error messages.
When constructing the BAR instance for a report item you can set the properties defined in the following table:
| Property | Required? | Description |
|---|---|---|
| description | yes | The message to display in the report as the report item's description. |
| test | no | The test that resulted in this report item (e.g. a regular expression or a Schematron assertion). |
| location | no | An indication of the relevant location in the validated content to highlight in relation to the report item. This is an arbitrary text that should make sense to the validation client. |
Note
Highlighting a report item's location in the test bed: When displaying a verify step's result, the GITB test bed leverages the location property of a report item to open a code editor at the specified location with the relevant message displayed inline. This is possible for text-based validated content, for which you need to do the following:
Include the content to highlight (typically the validation input) as a property in the TAR report's context with a given name (e.g. “INPUT”).
Set the report item's location property to a string of the format “NAME:LINE:COLUMN” where “NAME” is the name of the report's context item, “LINE” is the line number and “COLUMN” is the column. Setting this for example to “INPUT:100:0” will link to line 100 of the “INPUT” content.
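As a small illustration of the expected format (the helper names below are hypothetical, not part of the GITB API), a location string could be built and sanity-checked as follows:

```java
public class LocationSketch {

    // Build a report item location of the form "NAME:LINE:COLUMN", where NAME is
    // the name of a context item included in the TAR report (e.g. "INPUT").
    static String location(String contextItemName, int line, int column) {
        return contextItemName + ":" + line + ":" + column;
    }

    // Sanity-check that a location string follows the "NAME:LINE:COLUMN" pattern.
    static boolean isValidLocation(String location) {
        return location.matches("[^:]+:\\d+:\\d+");
    }
}
```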
The following code sample provides an example populating a report for a validation service’s validate output:
private TAR createValidationReport(List<String> errorMessages, String validatedContent) throws Exception {
TAR report = new TAR();
// Add the current timestamp to the report.
GregorianCalendar calendar = new GregorianCalendar();
report.setDate(DatatypeFactory.newInstance().newXMLGregorianCalendar(calendar));
// Add the detailed report items.
report.setReports(new TestAssertionGroupReportsType());
for (String errorMessage: errorMessages) {
BAR itemContent = new BAR();
itemContent.setDescription(errorMessage);
report.getReports().getInfoOrWarningOrError().add(objectFactory.createTestAssertionGroupReportsTypeError(itemContent));
}
// Add the report item counters.
report.setCounters(new ValidationCounters());
report.getCounters().setNrOfAssertions(BigInteger.valueOf(0));
report.getCounters().setNrOfWarnings(BigInteger.valueOf(0));
report.getCounters().setNrOfErrors(BigInteger.valueOf(errorMessages.size()));
// Add the input received in the report's context to be reported back to the client.
report.setContext(new AnyContent());
report.getContext().getItem().add(createAnyContent("INPUT", validatedContent, ValueEmbeddingEnumeration.STRING));
// Determine the overall result to report based on the validation results.
if (errorMessages.isEmpty()) {
report.setResult(TestResultType.SUCCESS);
} else {
report.setResult(TestResultType.FAILURE);
}
return report;
}
public AnyContent createAnyContent(String name, String value, ValueEmbeddingEnumeration embeddingMethod) {
AnyContent input = new AnyContent();
input.setName(name);
input.setValue(value);
input.setType("string");
input.setEmbeddingMethod(embeddingMethod);
return input;
}
This method would be called in a validation service to create the TAR object returned from the validate operation. The method is assumed to be called after validation has taken place in order to build the report. For simplicity, only error messages are considered, and the report items included contain only the minimum description. Note how the validated content is also returned in the report's context. This is not required but provides an example of how arbitrary data can be returned. Moreover, this would be especially useful if each error item also included its relevant location.
This example shows construction of the report after the actual validation has taken place. Decoupling domain-specific logic (i.e. the validation) from GITB-related code is a good practice, as the GITB service API may be only one facade of many. In practice however it can be trickier to achieve, as report construction is often done in parallel with the validation (e.g. via error listener constructs). Whether you choose to enforce a full decoupling of domain-specific code from GITB code is a design choice you will need to make.
To present a simpler case of report construction you can consider the following example from a messaging service:
private TAR createNotificationReport(String receivedContent) throws Exception {
TAR report = new TAR();
// Set the step result.
report.setResult(TestResultType.SUCCESS);
// Set the date.
report.setDate(DatatypeFactory.newInstance().newXMLGregorianCalendar(new GregorianCalendar()));
// Add the received content to the report's context to return it.
report.setContext(new AnyContent());
AnyContent messageOutput = new AnyContent();
messageOutput.setName("MESSAGE");
messageOutput.setValue(receivedContent);
messageOutput.setEmbeddingMethod(ValueEmbeddingEnumeration.STRING);
report.getContext().getItem().add(messageOutput);
return report;
}
In this example we pass back the message received to the test bed along with an overall “success” result and timestamp. The test session will show the corresponding GITB TDL step as successful and will expose the received content in the test session context for subsequent use (see Using service outputs in a test session for details).
Finally, the simplest kind of report is the one returned from processing services, as in this case the report itself is not used to return output. The only requirements for the report are to complete its result and date.
private TAR createProcessingReport() throws Exception {
TAR report = new TAR();
// Set the step result.
report.setResult(TestResultType.SUCCESS);
// Set the date.
report.setDate(DatatypeFactory.newInstance().newXMLGregorianCalendar(new GregorianCalendar()));
return report;
}
Reporting service errors
Unexpected service errors can be handled in two ways:
They can simply be left uncaught, resulting in a SOAP fault.
They can be caught and signalled by returning the output TAR report with result TestResultType.FAILURE.
Both approaches will result in the test bed displaying the relevant GITB TDL step as failed. The approach of returning a TAR report with a TestResultType.FAILURE result can be interesting if you want to return additional information regarding the error. This approach is possible for services linked to GITB TDL steps that are visually presented, i.e. validation and messaging services.
Contributing to test session logs
Test services, apart from returning outputs and reports to test sessions, can also contribute entries to their log outputs. Each test session generates a log consisting of progress messages that complement its test execution diagram as a means of providing additional feedback to testers. Logged messages are generated automatically by the test bed but can also be explicitly added by means of the GITB TDL log step. All messages come with a severity level, ranging from debug and information messages to warnings and errors.
As an alternative or complement to using the GITB TDL log step you can also have your custom test services contribute log entries. This could be done to add additional information on processing taking place within the test service, or to provide feedback to the user in case the test execution diagram is not sufficient. Contributing log entries is supported for all types of custom test services (validation, messaging and processing services) and is achieved by making a web service call on the log operation of the test bed’s SOAP API.
The log operation is defined separately per type of service, but in all cases expects as parameters:
The session ID for which the log entry is to be added.
The message to add as a text.
The message’s severity level (ERROR, WARNING, INFO or DEBUG).
It is important to note that the session ID is not necessarily the ID of the test session as defined within the test bed, but rather the session ID that the test service uses to manage its state. Specifically, this is:
For validation services the actual test session ID used in the test bed.
For messaging services the session ID returned with the output of the initiate operation.
For processing services the session ID returned with the output of the beginTransaction operation.
This session ID is included in all calls made by the test bed to the test service, allowing the test service to use it when making the call to add log entries to the test session. It is the test bed that then maps these session IDs to the test sessions that are to be updated.
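As an illustration of this mapping, a messaging or processing service typically generates its own session identifier when its session starts and keys its state on it; that same identifier is what it later passes to the log operation. The sketch below uses illustrative names (initiate, knows) and is not the actual service API.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch: the service generates its own session ID (e.g. in a messaging
// service's initiate operation) and keys per-session state on it. The test
// bed echoes this ID in subsequent calls, and it is also the ID to pass
// when calling the test bed's log operation.
public class SessionRegistry {

    private final ConcurrentMap<String, Object> sessions = new ConcurrentHashMap<>();

    // Called when a new session starts; the returned ID goes back to the
    // test bed as part of the operation's output.
    public String initiate() {
        String sessionId = UUID.randomUUID().toString();
        sessions.put(sessionId, new Object()); // placeholder for real per-session state
        return sessionId;
    }

    // Used to look up state when the test bed calls back with the same ID.
    public boolean knows(String sessionId) {
        return sessions.containsKey(sessionId);
    }
}
```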
Regarding the content of the log message to add, this is simple text that will be added as-is to the test session log. Note that when using the GITB TDL log step within a test case, you can use expressions to dynamically produce the log entry, referring for example to variables recorded in the test session’s context. Such expressions are not supported when logging via test services. In other words, the provided message is treated as plain text, not as an expression to evaluate.
Finally, it is important to explain how to determine the address of the test bed’s endpoint that receives log contents. The test service needs to determine this given that log entries are not communicated as synchronous responses to received test bed calls. In fact, log entries should ideally be handled in an asynchronous manner to avoid blocking the service’s main processing (e.g. the validation of inputs for a validation service). The approach followed to determine the test bed’s logging endpoint is to use WS-Addressing whereby the test bed includes a specific SOAP header with a reply address whenever it calls the test service. When developing the test service you thus have two approaches available to determine the test bed’s endpoint address:
Lookup the WS-Addressing header and use its value as the endpoint address.
Skip the dynamic lookup by simply adding the test bed’s endpoint address to the service’s configuration.
Using WS-Addressing makes this process transparent and requires no updates when addresses change. In addition, it permits the same test service instance to be used simultaneously by multiple test bed instances if this is needed. If you choose to simply define the test bed callback address as part of the service’s configuration, you need to ensure that the configured value is the final address to be used by the service, catering for things such as reverse proxies and Docker container names. Assuming the test bed is running without a proxy, on your localhost and with default port mappings (i.e. it is a development instance), the default endpoints are:
http://localhost:8080/itbsrv/ValidationClient when called from a validation service.
http://localhost:8080/itbsrv/MessagingClient when called from a messaging service.
http://localhost:8080/itbsrv/ProcessingClient when called from a processing service.
In case your test service is not of a single service type (e.g. it is used both for validation and messaging, implementing both service APIs) you can use any of these endpoints to send log messages. You need to make sure however that the endpoint you use corresponds to the operation for which you are adding a log entry and the session ID communicated as input to that operation. For example, if you want to log something relevant to a validation call you should use the session ID received in the validation call’s inputs and pass it to the test bed’s endpoint for validation services. Not doing so, e.g. using the validation input’s session ID with the test bed’s endpoint for messaging services, will most likely result in the log message being ignored due to the target test session not being found.
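If you opt for static configuration instead of WS-Addressing, the mapping from service type to default development endpoint can be captured in a small helper. The callbackEndpoint method and the service type keys below are illustrative; only the endpoint paths come from the defaults listed above.

```java
import java.util.Map;

// Resolves the default (development instance) test bed callback endpoint
// for a given service type. The paths match the defaults listed above;
// the helper itself is illustrative, not part of the GITB APIs.
public class LogEndpoints {

    private static final Map<String, String> PATHS = Map.of(
            "validation", "/itbsrv/ValidationClient",
            "messaging", "/itbsrv/MessagingClient",
            "processing", "/itbsrv/ProcessingClient");

    public static String callbackEndpoint(String baseUrl, String serviceType) {
        String path = PATHS.get(serviceType);
        if (path == null) {
            throw new IllegalArgumentException("Unknown service type: " + serviceType);
        }
        return baseUrl + path;
    }
}
```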
Note
Using WS-Addressing to determine the test bed’s endpoint address is also done when messaging services make asynchronous callbacks to signal received messages to test sessions.
Illustrating the above, the following example considers a processing service for which we log the requested operations of process calls.
private static final QName REPLY_TO_QNAME = new QName("http://www.w3.org/2005/08/addressing", "ReplyTo");
@Resource
private WebServiceContext wsContext;
public ProcessResponse process(ProcessRequest request) {
// Log the requested operation in the test session log.
String message = "Service carrying out " + request.getOperation() + " operation...";
log(message, LogLevel.INFO, request.getSessionId());
// Prepare the response.
ProcessResponse response = new ProcessResponse();
// ...
return response;
}
private void log(String message, LogLevel level, String sessionId) {
LogRequest logRequest = new LogRequest();
logRequest.setSessionId(sessionId);
logRequest.setMessage(message);
logRequest.setLevel(level);
// Use WS Addressing to determine the endpoint address.
String callbackAddress = getReplyToAddress();
createClient(callbackAddress).log(logRequest);
}
private String getReplyToAddress() {
List<Header> headers = (List<Header>) wsContext.getMessageContext().get(Header.HEADER_LIST);
for (Header header: headers) {
if (header.getName().equals(REPLY_TO_QNAME)) {
String replyToAddress = ((Element)header.getObject()).getTextContent().trim();
if (!replyToAddress.toLowerCase().endsWith("?wsdl")) {
replyToAddress += "?wsdl";
}
return replyToAddress;
}
}
return null;
}
private ProcessingClient createClient(String callbackAddress) {
JaxWsProxyFactoryBean proxyFactoryBean = new JaxWsProxyFactoryBean();
proxyFactoryBean.setServiceClass(ProcessingClient.class);
proxyFactoryBean.setAddress(callbackAddress);
ProcessingClient serviceProxy = (ProcessingClient)proxyFactoryBean.create();
return serviceProxy;
}
Retrieving test session metadata
Calls made by the test bed to test services take place in the context of test sessions. In all service calls the test bed includes metadata on the relevant test session as SOAP header elements, making it available to the service in case this is needed. Information on the test session is included in the SOAP header to avoid overburdening the operations’ inputs, given that test session information is typically not needed in most scenarios.
All metadata elements included in the SOAP header are simple text values and use the namespace http://www.gitb.com.
The included elements are as follows:
TestSessionIdentifier: The session identifier as defined within the test bed. For messaging and processing services, this will likely differ from the session identifier included as input in operations, given that that value is generated by the service itself.
TestCaseIdentifier: The identifier of the test case that relates to the test session, which serves to uniquely identify the test case within its test suite. Note that across test suites and specifications, this identifier is not guaranteed to be unique.
TestStepIdentifier: The value of the id attribute of the test step that triggered the service call (e.g. the verify step that resulted in a validation service’s validate call).
TestEngineVersion: The version number of the test engine, matching the version number of the GITB TDL and the GITB test services’ APIs.
Note
The TestStepIdentifier is not present in service calls occurring outside the scope of a test step. This is the case for a messaging service’s initiate and finalize operations, or for all services’ getModuleDefinition operation.
The use cases for such metadata are varied. For example, using the TestSessionIdentifier could be interesting for logging purposes if you want to match the identifiers used within the test bed itself. On the other hand, using the TestCaseIdentifier and TestStepIdentifier could prove useful if your service uses them to determine the operations to carry out (e.g. a data generation template defined as a resource within the test service).
To illustrate how this information is passed in service calls, consider the following sample payload of a messaging service’s receive operation. The gitb-prefixed header elements correspond to the included test session metadata:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://gitb.com/MessagingService/receive</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:6884ceea-80fd-4566-9317-9848ff8e68c4</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://localhost:8001/oop/services/messaging</To>
<ReplyTo xmlns="http://www.w3.org/2005/08/addressing">
<Address>http://localhost:8080/itbsrv/MessagingClient</Address>
</ReplyTo>
<gitb:TestSessionIdentifier xmlns:gitb="http://www.gitb.com">0f8f5da0-3a6b-4adb-9b5a-c5383c000178</gitb:TestSessionIdentifier>
<gitb:TestCaseIdentifier xmlns:gitb="http://www.gitb.com">testCase1</gitb:TestCaseIdentifier>
<gitb:TestStepIdentifier xmlns:gitb="http://www.gitb.com">receiveMessage1</gitb:TestStepIdentifier>
<gitb:TestEngineVersion xmlns:gitb="http://www.gitb.com">1.18.1</gitb:TestEngineVersion>
</soap:Header>
<soap:Body>
<ns4:ReceiveRequest xmlns:ns2="http://www.gitb.com/core/v1/" xmlns:ns3="http://www.gitb.com/tr/v1/" xmlns:ns4="http://www.gitb.com/ms/v1/">
<sessionId>9ff2d8e1-fc5e-4892-a711-05bdc42238f7</sessionId>
<callId>5b69d251-06d2-4e28-afc8-7c54a0c9c89a</callId>
<input embeddingMethod="STRING" name="inputMessage" type="string">
<ns2:value>Input text</ns2:value>
</input>
</ns4:ReceiveRequest>
</soap:Body>
</soap:Envelope>
How this information is accessed within your test service depends on the language and framework used to implement it. If using the Spring Framework, as the other code samples here do, you would extend the class implementing your service’s operations as follows:
@Component
public class MessagingServiceImpl implements MessagingService {
private static final QName TEST_STEP_ID_QNAME = new QName("http://www.gitb.com", "TestStepIdentifier", "gitb");
private static final QName TEST_SESSION_ID_QNAME = new QName("http://www.gitb.com", "TestSessionIdentifier", "gitb");
private static final QName TEST_CASE_ID_QNAME = new QName("http://www.gitb.com", "TestCaseIdentifier", "gitb");
private static final QName TEST_ENGINE_VERSION_QNAME = new QName("http://www.gitb.com", "TestEngineVersion", "gitb");
@Resource
private WebServiceContext wsContext;
/**
* Extract a value from the SOAP headers.
*
* @param name The name of the header to locate.
* @param valueExtractor The function used to extract the data.
* @return The extracted data.
* @param <T> The type of data extracted.
*/
private <T> T getHeaderValue(QName name, Function<Header, T> valueExtractor) {
return ((List<Header>) wsContext.getMessageContext().get(Header.HEADER_LIST))
.stream()
.filter(header -> name.equals(header.getName())).findFirst()
.map(valueExtractor).orElse(null);
}
/**
* Get the specified header element as a string.
*
* @param name The name of the header element to lookup.
* @return The text value of the element.
*/
private String getHeaderAsString(QName name) {
return getHeaderValue(name, (header) -> ((Element) header.getObject()).getTextContent().trim());
}
@Override
public SendResponse send(SendRequest parameters) {
// Log the session's metadata relevant to this service call.
LOG.info("Called send from test session {}. Test case is {} and step is {}.",
getHeaderAsString(TEST_SESSION_ID_QNAME),
getHeaderAsString(TEST_CASE_ID_QNAME),
getHeaderAsString(TEST_STEP_ID_QNAME)
);
// Proceed with the operation's implementation.
...
}
}
Note
Accessing metadata within test cases: The test bed also makes accessible test session metadata directly in test cases. The test session identifier, test case identifier and test engine version can also be accessed through the test session context.
Using the gitb-types library
The gitb-types library is a Java library maintained by the Test Bed team and published on Maven Central. It includes the GITB specification resources (its XSDs and WSDLs) as well as ready-to-use generated classes for test developers. Using the library in test services is not mandatory (you can generate your own classes or skip using classes altogether), but it is typically simpler to reuse it as-is.
Three variants of the library are available for test developers depending on your needs:
gitb-types-specs: Contains only the specification XSDs and WSDLs, without any class definitions.
gitb-types: Defines classes generated from the specifications and annotated using javax APIs, for use in Java EE (pre-Jakarta) projects.
gitb-types-jakarta: Defines classes generated from the specifications and annotated using Jakarta APIs, for use in Jakarta EE projects.
For your development you would typically use either the gitb-types or gitb-types-jakarta library. Which one you choose depends on your technology stack, and specifically on whether you use JAXB (Java to/from XML conversion) and JAX-WS (Java SOAP services) via the javax (javax.*) or Jakarta (jakarta.*) APIs. These are used in annotations in the library’s generated classes to allow automated (de)serialisation and service documentation. The implementations of such APIs are typically defined in the framework you use for web development, and need to remain consistent across your libraries.
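As a usage sketch, adding the Jakarta variant to a Maven build could look as follows. The groupId and version are deliberately left as placeholders: check the library’s entry on Maven Central for the exact coordinates and latest release before use.

```xml
<!-- Sketch of a Maven dependency on the Jakarta variant. Verify the
     groupId and latest version on Maven Central before use. -->
<dependency>
    <groupId><!-- see the gitb-types entry on Maven Central --></groupId>
    <artifactId>gitb-types-jakarta</artifactId>
    <version><!-- latest release --></version>
</dependency>
```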
Note
The archetypes for template services define the gitb-types-jakarta library as a dependency. Currently the Jakarta API variant is used as this aligns with the included version of CXF and Spring Boot.