CTC++ Host Run-Time add-on (CTCHRT)
Information in this document
corresponds to CTCHRT v2.0
[Remark: As of January 2013 Testwell
has announced the end-of-life of this CTC++ add-on package!]
What is CTCHRT
CTC++ Host Run-Time add-on (CTCHRT) is a novel
architectural arrangement for measuring code coverage (and execution
timing) on target machines. While the instrumented programs execute,
the execution data is written on the fly to the host, where it is
collected into a file (probefile). Later, at the host, a CTCHRT
utility is run to read the probefile, map the execution data to the
instrumented source files, and create (or accumulate into) the
datafile, which contains the execution data in the normal CTC++ tool
chain form. Thereafter the coverage and timing reports are obtained
as usual with the ctcpost and ctc2html utilities.
The CTCHRT delivery package contains all the needed
CTC++-specific logic, i.e. how the code is instrumented in the special
"CTCHRT way", how the coverage data is packed into a compact form
(normally 7-10 ASCII characters per probe), and how the coverage
data is extracted from the probefile and written to
the datafile at the host.
The user needs to implement the low-level data
transfer layer by which the small textual data fragments are sent
from the target to the host and captured into a file there. How
this can be done depends on what is possible with the
given host/target pair. As a rule of thumb, if something like
printf() is available, everything will be fine.
If the target can communicate with the host via some
debug channel, it can be used for CTCHRT's data transfer. The CTCHRT
data can be freely intermixed with other debug messages.
However, the individual encoded probes should be written atomically;
if one is not, the host side simply loses that individual coverage
hit.
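As an illustration, below is a minimal sketch of such a transfer
layer in C, assuming a printf()-like debug output routine exists on
the target. The function name ctchrt_send() and the routine
debug_printf() are hypothetical; the actual hook name and the encoded
probe format come from the CTCHRT delivery package.

    /* Hypothetical debug-channel output routine assumed to exist on the target. */
    extern void debug_printf(const char *fmt, ...);

    /* Called with one small encoded probe fragment (typically 7-10 ASCII
     * characters). Each fragment is written as one atomic debug message, so
     * it can be intermixed with other debug output; if a write is corrupted,
     * only that single coverage hit is lost at the host side. */
    void ctchrt_send(const char *encoded_probe)
    {
        debug_printf("%s\n", encoded_probe);
    }

On the host side the captured fragments are simply collected into the
probefile for later processing.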
Why CTCHRT
CTCHRT can be considered if, for example, the
following issues are relevant:
- The CTCHRT-way instrumentation does not generate
any global writable variables in the instrumented code
files. If there is a requirement to run the code from ROM memory,
the instrumented code can also be run from there.
- The CTCHRT run-time layer at the target does not
use any data memory, heap, or global writable data
area. The coverage data is immediately written to the host and is
not stored at the target.
- The CTCHRT run-time layer at the target is very
small and so consumes very little code memory. The low-level data
transfer layer presumably requires some more code memory, but that
is outside of the basic CTCHRT. In many target/host pairs there
is already a debug message writing API that can be
used.
- The CTCHRT run-time layer at the target does not need to be
initialised in any way before it is ready to transfer the
coverage data to the host. The instrumented programs, each
containing one or more instrumented code files, can simply be run in
parallel or in sequence, as the test session
requires.
- The low-level data transfer layer need not be 100%
robust. If some coverage hits are lost, or the sending is
corrupted (say, by interrupted or overlapping writes so that
individual probes are no longer atomic), the host side simply
remains unaware of those coverage hits.
Why not CTCHRT
CTCHRT may not be applicable to you if, for
example, the following issues are relevant:
- The instrumented code is very time-critical.
The continuous sending of coverage data may be so time-consuming
that it slows down the program execution too much. Sometimes the
coverage data sending may be so "hectic" that even with a
hardware-accelerated buffering arrangement in the transfer layer,
portions of the coverage data may still get lost. We have, however,
seen customer cases where the slowdown of the target code execution
and the transfer layer capacity have not been a
problem.
- Another drawback is the potentially huge size of the
probefile. This of course depends on how long the test session
is and how intensively the target code runs in the instrumented
files. Luckily, current host computers have big
disks.
Comparing CTCHRT to...
Testwell CTC++ also has the following
add-on packages that can be used in target testing: CTC++
Host-Target add-on (HOTA), CTC++ for Symbian Target
Devices add-on (CTC4STD), and CTC++ Bitcov add-on
(Bitcov). Why couldn't they be used instead? Below are some
remarks on the subject.
HOTA:
- HOTA would give the lowest run-time
overhead. The additional code brought in by the
instrumentation compiles to inline code, only a few extra
machine instructions per probe. In CTCHRT the run-time overhead is
much higher.
- HOTA introduces global
writable variables in the instrumented files. Some target
run-time architectures do not allow global writable variables.
CTCHRT does not need any.
- In HOTA the coverage hits are first
collected into the target machine's main memory, either allocated
from the heap or allocated statically inside each instrumented file.
In CTCHRT the execution data is not stored at the target; it is
collected at the host. (A conceptual sketch contrasting the two
approaches follows this list.)
- In HOTA the collected execution data
needs to be separately written to the host machine (as an encoded
character stream), normally after the test session or periodically
during it. In CTCHRT the coverage data is written to the host
straight away.
- In HOTA the instrumented files and
HOTA's run-time layer must be in the same address space. In
CTCHRT there can be many instrumented programs, even running in
parallel, each in its own address space. In CTCHRT the
coverage data collection is in a way "system-wide", covering all the
independent processes at the target.
- In HOTA the
trigger for dumping the coverage data from memory needs to be
arranged somehow. There are various ways to do it, but it is
nevertheless a step that must be taken care of. In CTCHRT there is
no such step.
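As a rough illustration of the difference, the following sketch
contrasts the two approaches. It is purely conceptual and not the
actual instrumentation output of either add-on; the names
(NUM_PROBES, ctc_counters, ctchrt_send) are made up for this example.

    /* HOTA-like style: each coverage hit increments a counter held in a
     * writable data area on the target; the counters are dumped to the
     * host later. */
    #define NUM_PROBES 128                          /* hypothetical */
    static unsigned long ctc_counters[NUM_PROBES];
    #define HOTA_STYLE_PROBE(i)    (ctc_counters[(i)]++)

    /* CTCHRT-like style: each coverage hit is emitted immediately to the
     * host as a small encoded message; nothing is stored on the target. */
    extern void ctchrt_send(const char *encoded_probe);
    #define CTCHRT_STYLE_PROBE(s)  (ctchrt_send(s))

The first style is fast but consumes target data memory; the second
consumes essentially no data memory but pays the cost of one message
per coverage hit.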
CTC4STD:
- CTC4STD is Symbian OS specific. It can be used on user-mode code
only (its run-time support layer has been built assuming that the
execution context is user-mode). CTCHRT has been adapted to Symbian
OS devices and can be used on both user-mode and kernel-mode code at
the device.
- Both CTC4STD and CTCHRT have their
own instrumentation styles. Neither of them causes global
writable variables in the instrumented code.
- Both CTC4STD and CTCHRT can be
applied to Symbian OS projects so that neither the project definition
files nor any other project files need modifications because
of CTC++ use.
- Presumably CTC4STD is faster than CTCHRT, but whether the speed
matters depends on the use case. If the code under test is a single,
not time-critical program, perhaps composed of 20-50 code files,
CTCHRT might still work fine. CTC4STD can cope with test sessions of
large subsystems composed of many hundreds of code files. Regarding
the CTCHRT run-time speed on the Symbian OS target, there is
development work going on. Encouraging results exist, but nothing
more specific can be said of it yet.
Bitcov:
Bitcov has been developed by Verifysoft
Technology GmbH (Testwell's distributor).
Bitcov is a
derivative work based on HOTA. It is meant for small embedded
microcontroller-style targets, which have very little free
data memory (RAM) for CTC++'s use, or where the HOTA style of
transferring the coverage data as an encoded ASCII stream to the
host is difficult.
In Bitcov there is one global bit array
in the target main memory where the execution hits are recorded, one
bit per probe. For example, a 1000-byte array can record 8000
probes. That might well be enough for a reasonably sized
instrumented program, around 30000 lines of instrumented code.
With normal instrumentation the CTC++ run-time data area consumption
in a similar case would be 8000 * 4 bytes (one counter is
normally 4 bytes) = 32000 bytes, plus something more
for CTC++'s internal control data needs. After the test
run the bit array is captured to the host, where it is
converted to a form suitable for the HOTA tool chain for
further processing (ctc2dat, ctcpost, ctc2html). (A small sketch of
this bit-vector idea is given after the list below.)
- All the files that are to be
measured at the target need to be instrumented in "one
shot".
- Coverage data is collected in the
target memory into a zero-initialized global bit vector.
- All
instrumented code needs to be in the same address space in order
to access the bit array.
- Timing instrumentation is not
supported.
- There is no CTC++ run-time
layer at the target! The instrumented files write directly to the
bit vector.
- Bitcov has the smallest footprint
in regard to CTC++'s overhead on the data memory
requirements at the target.
- Bitcov has about the same
footprint as, for example, HOTA in regard to CTC++'s overhead
on the code memory requirements at the target, i.e. how much the
instrumentation increases the code size.
- In coverage reports the counters are
reduced to 0 (not executed) and 1 (executed). In a normal coverage
report the counter value tells how many times the code at the
probe location was executed.
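As referred to above, here is a minimal sketch of the bit-vector idea
in C. The array size and the macro name are chosen for this
illustration and are not Bitcov's actual instrumentation interface;
one bit per probe, so a 1000-byte array covers 8000 probes.

    #define BITCOV_ARRAY_BYTES 1000                 /* covers 8000 probes */

    /* Global, zero-initialized bit vector in the target data memory. */
    static unsigned char bitcov_bits[BITCOV_ARRAY_BYTES];

    /* Record a hit of probe number i by setting its bit. The hit count is
     * not kept, so reports can only show 0 (not executed) or 1 (executed). */
    #define BITCOV_PROBE(i)  (bitcov_bits[(i) >> 3] |= (unsigned char)(1u << ((i) & 7)))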