
Test scenarios
HP deployed the configuration shown in Figure 2 to simulate an HP SBC environment. To generate typical HP SBC workloads, HP ran a series of performance tests based on the Heavy, Medium, and Light User scripts described in Table 3.
For each test scenario, HP began by running the appropriate script with a group of ten simulated
users. Start times were staggered to eliminate authentication overhead. After the sessions finished, HP
added ten more users, then repeated the testing.
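This ramp-up procedure lends itself to a simple loop. The Python sketch below is illustrative only: run_user_script, the 30-second stagger interval, and the 50-user ceiling are assumptions standing in for HP's actual test harness and the scripts in Table 3.

```python
import threading
import time


def run_user_script(user_id: int) -> None:
    """Hypothetical stand-in for one simulated user session; the real Heavy,
    Medium, and Light User scripts are described in Table 3."""
    time.sleep(1.0)  # placeholder for the scripted workload


def run_test(user_count: int, stagger_seconds: float = 30.0) -> None:
    """Run one test pass with the given number of simulated users."""
    threads = []
    for user_id in range(user_count):
        thread = threading.Thread(target=run_user_script, args=(user_id,))
        thread.start()
        threads.append(thread)
        # Stagger each session's start so that logons do not all hit the
        # server at once and skew the steady-state measurement.
        time.sleep(stagger_seconds)
    for thread in threads:
        thread.join()  # wait for every session in this pass to finish


if __name__ == "__main__":
    # Start with ten users and add ten more after each pass completes.
    for user_count in range(10, 60, 10):
        run_test(user_count)
        print(f"Completed test pass with {user_count} simulated users")
```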
Monitoring processor utilization
HP primarily monitored processor utilization to establish the optimal number of users supported by the HP SBC server. By definition, the optimal number of users is the number of users active when processor utilization reaches 80%².
To obtain this key performance metric, HP used the Windows Performance Monitor (Perfmon) analysis
tool to monitor % Processor Time values.
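For readers who want to reproduce a similar measurement outside Perfmon, the sketch below shows the idea in Python. The psutil library, the 60-second sampling window, and the 5-second interval are assumptions made for illustration; they are not part of HP's methodology.

```python
import time

import psutil  # third-party library; HP's measurements used Windows Perfmon

UTILIZATION_CEILING = 80.0  # HP's threshold for the optimal number of users


def average_processor_time(duration_seconds: int = 60,
                           interval_seconds: float = 5.0) -> float:
    """Return average system-wide CPU utilization over the sampling window."""
    samples = []
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        # cpu_percent(interval=...) blocks for the interval and returns a
        # value comparable to Perfmon's % Processor Time (_Total) counter.
        samples.append(psutil.cpu_percent(interval=interval_seconds))
    return sum(samples) / len(samples)


if __name__ == "__main__":
    utilization = average_processor_time()
    if utilization >= UTILIZATION_CEILING:
        print(f"{utilization:.1f}% >= 80%: optimal user count reached")
    else:
        print(f"{utilization:.1f}%: processor headroom remains")
```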
Validation using a canary script
To validate the scalability metrics obtained using % Processor Time, HP also ran canary scripts to
characterize Heavy User response times for discrete activities (such as the time taken for an
application to be invoked or for a modal box to appear).
By monitoring these response times as more and more users logged on, HP was able to obtain further
scalability metrics.
Note:
When using canary scripts, HP considers optimal user scalability to
be reached when response times increase markedly over a
baseline measurement.
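As a rough illustration of how a canary measurement can be compared against a baseline, consider the Python sketch below. perform_canary_action and the 1.5x degradation factor are hypothetical; HP's canary scripts and its definition of a "marked" increase are not specified at this level of detail.

```python
import time
from statistics import mean


def perform_canary_action() -> None:
    """Hypothetical placeholder for one discrete activity, such as invoking
    an application or waiting for a modal box to appear."""
    time.sleep(0.2)


def measure_response_time(repetitions: int = 5) -> float:
    """Return the mean elapsed time, in seconds, for the canary action."""
    timings = []
    for _ in range(repetitions):
        start = time.perf_counter()
        perform_canary_action()
        timings.append(time.perf_counter() - start)
    return mean(timings)


if __name__ == "__main__":
    # DEGRADATION_FACTOR is an arbitrary threshold chosen for illustration;
    # the paper only says response times must increase "markedly".
    DEGRADATION_FACTOR = 1.5
    baseline = measure_response_time()    # captured with few users logged on
    under_load = measure_response_time()  # re-measured as users are added
    if under_load > baseline * DEGRADATION_FACTOR:
        print("Response time has increased markedly: optimal scalability reached")
    else:
        print("Response time remains close to the baseline")
```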
² Historically, HP has defined the optimal number of users as the number of users that are active when processor utilization (% Processor Time) reaches 80%. Additional users are supported, but response times may become unacceptable.