<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[performance testing - Matthias Lee - Musings on Software and Performance Engineering]]></title><description><![CDATA[Matthias Lee is a Software Performance Engineer, Technical Lead and Computer Science PhD. Currently a Principal Performance Engineer at Appian.]]></description><link>https://matthiaslee.com/</link><image><url>https://matthiaslee.com/favicon.png</url><title>performance testing - Matthias Lee - Musings on Software and Performance Engineering</title><link>https://matthiaslee.com/</link></image><generator>Ghost 2.14</generator><lastBuildDate>Tue, 18 Nov 2025 07:32:35 GMT</lastBuildDate><atom:link href="https://matthiaslee.com/tag/performance-testing/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Don't blindly trust your summary statistics.]]></title><description><![CDATA[<p><strong>Summary statistics</strong> are a common way to evaluate and compare performance data. They are simple, easy to compute and most people have an intuitive understanding of them, therefore mean, median, standard deviation and percentiles tend to be the default metrics used to report, monitor and compare performance.<br>
Many of the</p>]]></description><link>https://matthiaslee.com/dont-blindly-trust-your-summary-statistics/</link><guid isPermaLink="false">5a0a60d882e47c00018dabf2</guid><category><![CDATA[performance testing]]></category><category><![CDATA[tends]]></category><category><![CDATA[distribution]]></category><category><![CDATA[summary statistics]]></category><dc:creator><![CDATA[Matthias A. Lee]]></dc:creator><pubDate>Mon, 15 May 2017 01:26:53 GMT</pubDate><content:encoded><![CDATA[<p><strong>Summary statistics</strong> are a common way to evaluate and compare performance data. They are simple, easy to compute and most people have an intuitive understanding of them; as a result, mean, median, standard deviation and percentiles tend to be the default metrics used to report, monitor and compare performance.<br>
Many of the common Load and Performance testing tools (<a href="https://httpd.apache.org/docs/2.4/programs/ab.html">ApacheBench</a>, <a href="https://github.com/httperf/httperf">Httperf</a> and <a href="http://locust.io">Locust.IO</a>) produce reports using these metrics to summarize their results. While easy to understand, these metrics rely on the assumption that what you are measuring stays constant during your test and, even more importantly, that the samples follow a normal distribution; often this is <em>not</em> the case.<br>
In this post we will evaluate two tricky scenarios which I have seen come up in real-world testing: first, a simple example of how two very different distributions can have the same summary statistics; second, an example of how summary statistics and distributions can conceal underlying problems.</p>
<p><strong>First</strong>, let's start with a simple set of performance test results, featuring 1000 samples of a web service endpoint.</p>
<p><img src="https://matthiaslee.com/content/images/2017/05/bimodal_latencies-1.png" alt="figure 1: 1000 latency samples of a web service endpoint"></p>
<p>Our favorite summary statistics have been overlaid, showing us the mean, median and +/- standard deviation. At first glance there is nothing interesting about these results. We see some variability, perhaps due to network jitter or load on the system, but otherwise a pretty consistent result. <em>Can you identify any interesting features? Given these summary statistics, could you determine a change in behaviour?</em><br>
In the above example, the mean sits at 26.6, the median at 26.01 and the standard deviation at around 3. The fact that the median is slightly lower than the mean suggests we may have a positively <a href="https://en.wikipedia.org/wiki/Skewness">skewed distribution</a>. This is a common feature of latency distributions, since at least a few packets always hit snags such as errors or take the scenic network path.<br>
If we just look at the summary statistics, we do not get the full picture. <em>Figure 2</em> shows two distributions with an <em>identical</em> mean, median and standard deviation, but as you can see these are very different in shape.<br>
<em>Could you identify which distribution corresponds to the samples from figure 1?</em></p>
<p><img src="https://matthiaslee.com/content/images/2017/05/skewed_normal_dist-2.png#centered" alt="figure 2(a): Skewed Normal Distribution"><br>
<img src="https://matthiaslee.com/content/images/2017/05/bimodal_dist.png" alt="figure 2(b): Bimodal Normal Distribution"></p>
<p>Intuitively and most commonly with latencies, the distribution tends to look more like <em>figure 2(a)</em>, but in our case the actual distribution is as in <em>figure 2(b)</em>. Multi-modal distributions often indicate some sort of caching at work, the lower mode representing a cache hit and the higher mode a cache miss. Understanding changes in the relationship between cache hits and misses is very important, as a rise in cache misses could indicate a serious problem.<br>
Given only the medians, means and standard deviations, it would be impossible to determine any difference, so performance changes such as these would never surface. There is no easy solution here besides adding more advanced metrics. One such metric to consider is the <a href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test">Kolmogorov–Smirnov test</a>, which computes the maximum difference between two <a href="https://en.wikipedia.org/wiki/Cumulative_distribution_function">Cumulative Distribution Functions</a>.</p>
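<p>As an illustration (my own sketch, not from any particular library), the two-sample KS statistic is simply the largest vertical gap between the two empirical CDFs, and fits in a few lines of plain Python:</p>

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:  # the empirical CDFs only change at observed points
        cdf_a = bisect.bisect_right(a, x) / len(a)
        cdf_b = bisect.bisect_right(b, x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d
```

<p>A statistic near 0 means the two latency samples are distributed alike; a statistic near 1 means they barely overlap, even if their means and medians happen to match.</p>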
<p><strong>Another</strong> gotcha is results which, when evaluated based on their summary statistics <em>and</em> their distribution (see <em>figure 3</em>), look completely normal: no bimodal tendencies, a slight right skew, but nothing that stands out.</p>
<p><img src="https://matthiaslee.com/content/images/2017/05/latency_dist2-2.png" alt="figure 3: Deceptive latency distribution"></p>
<p>These are the trickiest: the results that don't ring any alarm bells are the ones that will bite you once you go into production. The critical missing piece is the <a href="https://en.wikipedia.org/wiki/Time_domain">time-domain</a> information of the original results, which by definition cannot be captured by summary statistics or distributions.<br>
Collecting time-domain information usually is not a problem, but on high-throughput tests it may become prohibitively expensive, both memory- and storage-wise. Instead you may be tempted to rely purely on streaming statistics, perhaps using a snazzy sliding-window histogram to do <a href="https://en.wikipedia.org/wiki/Reservoir_sampling">reservoir sampling</a> or something like the <a href="https://github.com/tdunning/t-digest">t-digest</a>. These are fantastic approaches and I am absolutely in favor of using them, but if you do not keep at least some interval-based snapshots of the streaming statistics, you may end up discarding valuable information.<br>
Let's return to our example from <em>figure 3</em>: when viewed as a time-series (see <em>figure 4</em>), it is clear that we have a significant trend!</p>
<p><img src="https://matthiaslee.com/content/images/2017/05/trending_latencies.png" alt="figure 4: Trending latencies are invisible when looking at the distribution"></p>
<p>Trends cannot be characterized using summary statistics and add extra complexity to performance comparisons; they should therefore be avoided whenever possible.<br>
To ensure that trends do not secretly distort your statistics, compute a <a href="https://en.wikipedia.org/wiki/Robust_regression">robust linear regression</a> metric (I've had good luck with the <a href="https://en.wikipedia.org/wiki/Random_sample_consensus">RANSAC</a> algorithm) to quantify the trend in terms of slope and y-intercept. Given these metrics, it becomes easy to develop a sanity check that determines whether any drastic trend changes have occurred.</p>
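<p>To make the idea concrete, here is a deliberately minimal RANSAC-style line fit in plain Python (real implementations, such as scikit-learn's <code>RANSACRegressor</code>, are more sophisticated; the thresholds and trial count below are illustrative): repeatedly fit a line through two random samples and keep the candidate with the most inliers.</p>

```python
import random

def ransac_line(xs, ys, trials=200, threshold=1.0, seed=0):
    """Sketch of a RANSAC line fit: returns the (slope, intercept) of the
    two-point candidate line with the most inliers within `threshold`."""
    rng = random.Random(seed)
    best, best_inliers = (0.0, 0.0), -1
    for _ in range(trials):
        i, j = rng.sample(range(len(xs)), 2)
        if xs[i] == xs[j]:
            continue  # vertical candidate line, skip
        slope = (ys[j] - ys[i]) / (xs[j] - xs[i])
        intercept = ys[i] - slope * xs[i]
        # count points whose residual falls within the threshold
        inliers = sum(1 for x, y in zip(xs, ys)
                      if threshold - abs(y - (slope * x + intercept)) >= 0)
        if inliers > best_inliers:
            best, best_inliers = (slope, intercept), inliers
    return best
```

<p>A near-zero slope on your latency time-series is a quick sanity check that no trend is distorting the summary statistics, and because the fit is driven by inlier counts, the occasional slow request barely influences it.</p>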
<p><strong>Summary statistics</strong> can be valuable first indicators about performance, but can easily lead to false conclusions if not combined with other metrics. It is especially important to retain time-domain information to be able to detect trends which might otherwise be hidden. Stay tuned for future posts which will deep dive on how to accurately detect the slightest performance changes.</p>
]]></content:encoded></item><item><title><![CDATA[Caching Ghost with Apache for Maximum Performance, 100x faster]]></title><description><![CDATA[<p>Ghost can be a bit CPU hungry, especially for a lightweight (single core) VPS, but all of that can be negated with a little bit of caching. Luckily Apache's <code>mod_cache_disk</code> makes easy work of this.</p>
<h2 id="configuringthecache">Configuring the cache:</h2>
<p>First we need to enable <code>mod_cache</code>, <code>mod_cache_disk</code></p>]]></description><link>https://matthiaslee.com/caching-ghost-with-apache-for-maximum-performance-100x-faster/</link><guid isPermaLink="false">5a0a60d882e47c00018dabf0</guid><category><![CDATA[apache]]></category><category><![CDATA[cache]]></category><category><![CDATA[ghost]]></category><category><![CDATA[performance testing]]></category><dc:creator><![CDATA[Matthias A. Lee]]></dc:creator><pubDate>Sun, 23 Apr 2017 05:35:44 GMT</pubDate><content:encoded><![CDATA[<p>Ghost can be a bit CPU hungry, especially for a lightweight (single core) VPS, but all of that can be negated with a little bit of caching. Luckily Apache's <code>mod_cache_disk</code> makes easy work of this.</p>
<h2 id="configuringthecache">Configuring the cache:</h2>
<p>First we need to enable <code>mod_cache</code>, <code>mod_cache_disk</code> and <code>mod_expires</code>:</p>
<pre><code>sudo a2enmod cache
sudo a2enmod cache_disk
sudo a2enmod expires
</code></pre>
<p>Then edit your virtual host file, usually <code>/etc/apache2/sites-enabled/default.conf</code> (may differ based on your setup):</p>
<pre><code>&lt;VirtualHost *:80&gt;
     # Domain name and Alias
     ServerName example.com
     ServerAlias www.example.com

     # Configure Reverse proxy for Ghost
     ProxyPreserveHost on
     ProxyPass / http://localhost:1234/
     ProxyPassReverse / http://localhost:1234/

     CacheQuickHandler off
     CacheLock on
     CacheLockPath /tmp/mod_cache-lock
     CacheLockMaxAge 5
     CacheIgnoreHeaders Set-Cookie

     &lt;Location /&gt;
        # Enable disk cache, set defaults
        CacheEnable disk
        CacheHeader on
        CacheDefaultExpire 600
        CacheMaxExpire 86400
        FileETag All

        # Set cache-control headers for all requests
        # which do not have them by default
        # must enable: mod_expires
        ExpiresActive on
        ExpiresDefault &quot;access plus 15 minutes&quot;
    &lt;/Location&gt;

    # While this is not strictly needed, since Ghost automatically
    # passes back 'Cache-Control: no-cache, private',
    # it makes me feel better to explicitly state it again.
    &lt;Location /ghost&gt;
        # Don't cache the ghost admin interface
        SetEnv no-cache
    &lt;/Location&gt;

&lt;/VirtualHost&gt;
</code></pre>
<p>Finally we need to restart apache and then we are done!</p>
<pre><code>sudo service apache2 restart
</code></pre>
<p>Now it's time to check whether your cache is working by inspecting the headers.</p>
<pre><code>:~$ curl -i -X GET http://example.com | less
HTTP/1.1 200 OK
Date: Sun, 21 Apr 2017 05:15:13 GMT
Server: Apache
Cache-Control: public, max-age=0, max-age=900
Expires: Sun, 21 Apr 2017 05:30:12 GMT
Age: 832
X-Cache: HIT from example.com
...
</code></pre>
<p>As long as you see an <code>X-Cache</code> and a <code>Cache-Control</code> header, it is all working. Now let's see what kind of performance improvement we have achieved.</p>
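<p>If you want to automate this check (a hypothetical convenience helper of my own, not part of the Apache setup), a few lines of Python can classify a response from its headers; pair it with <code>http.client</code> or <code>urllib</code> to fetch the headers from your live site:</p>

```python
def cache_status(headers):
    """Classify Apache's caching behaviour from a dict of response
    headers (header names may be in any case)."""
    h = {k.lower(): v for k, v in headers.items()}
    if "x-cache" not in h:
        return "caching not active"
    if h["x-cache"].startswith("HIT"):
        return "served from cache"
    return "cache miss"

print(cache_status({"X-Cache": "HIT from example.com",
                    "Cache-Control": "public, max-age=900"}))
# prints "served from cache"
```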
<h1 id="performancetesting">Performance Testing:</h1>
<p>To quantify the improvement, I broke out <code>ApacheBench</code> and ran a couple of quick tests from a neighboring machine.<br>
The first test, without caching enabled, yields approximately <strong>25 requests per second</strong> with a median response time of <strong>~4 seconds</strong>:</p>
<pre><code>Concurrency Level:      100
Time taken for tests:   40.943 seconds
Complete requests:      1000
Requests per second:    24.42 [#/sec] (mean)
Time per request:       4094.286 [ms] (mean)
Time per request:       40.943 [ms] (mean, across all concurrent requests)
Transfer rate:          282.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   1.7      1      14
Processing:  1052 3936 670.4   3933    5252
Waiting:     1052 3936 670.4   3933    5252
Total:       1056 3938 669.7   3934    5252

Percentage of the requests served within a certain time (ms)
  50%   3934
  66%   4119
  75%   4261
  80%   4406
  90%   4554
  95%   5067
  98%   5173
  99%   5211
 100%   5252 (longest request)

</code></pre>
<p>After enabling the caching, we get <strong>~2700 requests per second</strong> with a median response time of <strong>31 milliseconds</strong>. That is over 100x more requests served per second!</p>
<pre><code>Concurrency Level:      100
Time taken for tests:   18.487 seconds
Complete requests:      50000
Failed requests:        0
Total transferred:      597400000 bytes
HTML transferred:       578850000 bytes
Requests per second:    2704.57 [#/sec] (mean)
Time per request:       36.974 [ms] (mean)
Time per request:       0.370 [ms] (mean, across all concurrent requests)
Transfer rate:          31556.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    8  52.1      4    1019
Processing:     1   29  19.1     26     693
Waiting:        1   28  16.9     26     659
Total:          2   37  55.1     31    1158

Percentage of the requests served within a certain time (ms)
  50%     31
  66%     34
  75%     37
  80%     40
  90%     46
  95%     51
  98%     67
  99%     91
 100%   1158 (longest request)
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Performance Testing 101 - 5 min intro & example]]></title><description><![CDATA[Introduction to performance testing, using ApacheBench to load test a simple Flask server and optimizing its performance.]]></description><link>https://matthiaslee.com/performance-testing-101-5-minute-intro/</link><guid isPermaLink="false">5a0a60d882e47c00018dabee</guid><category><![CDATA[performance testing]]></category><category><![CDATA[apache bench]]></category><category><![CDATA[Flask]]></category><dc:creator><![CDATA[Matthias A. Lee]]></dc:creator><pubDate>Fri, 21 Apr 2017 02:56:00 GMT</pubDate><content:encoded><![CDATA[<p>When developing and deploying web services, apps or sites, the following questions come up: <em>&quot;How will it perform?&quot;, &quot;How many concurrent users will it support?&quot;, &quot;If I tweak this setting, will it be faster?&quot;, &quot;Do these new features affect performance?&quot;</em>. The list could go on and on. Performance questions are common; solid answers are not.</p>
<p>Performance testing can take many different shapes, from dead-simple one-liners to complex setups, tests, tear-downs and analysis. While this article focuses on quick, easy and straightforward testing, future articles will address more advanced topics.</p>
<p>There are some great, easy tools for getting first ball-park answers to performance questions, such as how many concurrent users a system supports and how response time changes as load increases. Here I'll give a short intro to <code>ApacheBench</code>.</p>
<p>Let us begin by setting up some basic terminology. First, let's refer to our machine under test as the <code>host</code>; this can be any kind of http-accessible server you have. Second, we will want an <code>agent</code> machine to drive our tests from.<br>
When performance testing, it is key to limit the number of variables which could distort our results. Ideally your <code>agent</code> is a separate, dedicated machine as close as possible (network-distance wise) to your <code>host</code> system, in order to minimize the amount of networking you test. This is especially relevant when you are testing applications hosted in a shared environment (i.e. the cloud). The performance impact of <em>noisy neighbors</em> can be surprising, but that is a topic we will explore in detail in the future.</p>
<h1 id="apachebench">ApacheBench</h1>
<p><strong>ApacheBench</strong> is a command line tool (<code>ab</code>) which allows for simple load driving against HTTP hosts. It's great at generating large numbers of <code>REST</code> requests and is capable of producing thousands of requests per second. Generally I find ApacheBench most useful for getting a rough idea of how many requests an application can handle. It's extremely simple to use and therefore a great tool while debugging configurations.</p>
<p>To install on Debian/Ubuntu:</p>
<pre><code>sudo apt-get install apache2-utils
</code></pre>
<p>To Install on RHEL/Centos/Fedora</p>
<pre><code>sudo yum install httpd-tools
</code></pre>
<h3 id="usage">Usage:</h3>
<pre><code>Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. to spend on benchmarking
                    This implies -n 50000
    -s timeout      Seconds to max. wait for each response
                    Default is 30 seconds
    -b windowsize   Size of TCP send/receive buffer, in bytes
    -B address      Address to bind to when making outgoing connections
    -p postfile     File containing data to POST. Remember also to set -T
    -u putfile      File containing data to PUT. Remember also to set -T
    -T content-type Content-type header to use for POST/PUT data, eg.
                    'application/x-www-form-urlencoded'
                    Default is 'text/plain'
    -v verbosity    How much troubleshooting info to print
    -w              Print out results in HTML tables
    -i              Use HEAD instead of GET
    -x attributes   String to insert as table attributes
    -y attributes   String to insert as tr attributes
    -z attributes   String to insert as td or th attributes
    -C attribute    Add cookie, eg. 'Apache=1234'. (repeatable)
    -H attribute    Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
                    Inserted after all normal header lines. (repeatable)
    -A attribute    Add Basic WWW Authentication, the attributes
                    are a colon separated username and password.
    -P attribute    Add Basic Proxy Authentication, the attributes
                    are a colon separated username and password.
    -X proxy:port   Proxyserver and port number to use
    -V              Print version number and exit
    -k              Use HTTP KeepAlive feature
    -d              Do not show percentiles served table.
    -S              Do not show confidence estimators and warnings.
    -q              Do not show progress when doing more than 150 requests
    -l              Accept variable document length (use this for dynamic pages)
    -g filename     Output collected data to gnuplot format file.
    -e filename     Output CSV file with percentages served
    -r              Don't exit on socket receive errors.
    -m method       Method name
    -h              Display usage information (this message)
    -Z ciphersuite  Specify SSL/TLS cipher suite (See openssl ciphers)
    -f protocol     Specify SSL/TLS protocol
                    (TLS1, TLS1.1, TLS1.2 or ALL)
</code></pre>
<h3 id="exampleusage">Example usage:</h3>
<pre><code> ab -c 1 -n 1000 http://example.com/
</code></pre>
<h1 id="puttingapachebenchtouse">Putting ApacheBench to use:</h1>
<p>The above section should be plenty to get you started, but let's look at a quick example of testing caching. Below I've set up a simple <code>flask</code> server which, on each request, calculates the Fibonacci number for a random index between <code>1</code> and <code>30</code>.</p>
<pre><code>#!/usr/bin/env python
#
# To start this server, you must have python and flask installed
# Start server: python testserver-fib.py
#
# To install flask use the pip line below:
# pip install Flask
# or visit: http://flask.pocoo.org/docs/0.12/installation/

from flask import Flask
import random
app = Flask(__name__)

# snagged from: http://stackoverflow.com/a/499245
def F(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return F(n-1)+F(n-2)

@app.route('/')
def hello_world():
    r = random.randint(1,30)
    fib = F(r)
    # ApacheBench expects constant-length output
    return 'fib({0:02}):{1:06}'.format(r, fib)

if __name__ == &quot;__main__&quot;:
    app.run(debug=True)
</code></pre>
<p>Now let's see how we do performance-wise. We set the concurrency to 1 using <code>-c 1</code> and the number of requests to 500 using <code>-n 500</code>. Note that we are using the simple Flask dev-server, which is single-threaded.</p>
<pre><code>m@test:~$ ab -c 1 -n 500 http://127.0.0.1:5000/
-- snip --
Concurrency Level:      1
Time taken for tests:   17.821 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      85000 bytes
HTML transferred:       7000 bytes
Requests per second:    28.06 [#/sec] (mean)
Time per request:       35.642 [ms] (mean)
Time per request:       35.642 [ms] (mean, across all concurrent requests)
Transfer rate:          4.66 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1   35  88.2      1     471
Waiting:        0   35  88.2      1     471
Total:          1   36  88.2      1     471

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      5
  75%     16
  80%     27
  90%    107
  95%    277
  98%    447
  99%    461
 100%    471 (longest request)
</code></pre>
<p>In the above example, we see the mean request time was <code>~36ms</code>, the median was <code>1ms</code> and we achieved <code>28rps</code> (requests per second). What happens if instead of a single connection we have 10 concurrent connections (setting <code>-c 10</code>)?</p>
<pre><code>m@test:~$ ab -c 10 -n 500 http://127.0.0.1:5000/
-- snip --
Concurrency Level:      10
Time taken for tests:   18.579 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      85000 bytes
HTML transferred:       7000 bytes
Requests per second:    26.91 [#/sec] (mean)
Time per request:       371.583 [ms] (mean)
Time per request:       37.158 [ms] (mean, across all concurrent requests)
Transfer rate:          4.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     3  360 294.4    294    1470
Waiting:        2  360 294.4    294    1470
Total:          3  360 294.4    294    1470

Percentage of the requests served within a certain time (ms)
  50%    294
  66%    443
  75%    529
  80%    579
  90%    746
  95%   1018
  98%   1136
  99%   1160
 100%   1470 (longest request)
</code></pre>
<p>While the RPS remains very similar to before at <code>~27rps</code>, our response times have gone through the roof (mean of <code>371ms</code> and median of <code>294ms</code>)! Here we have a situation where multiple parallel connections get serialized and processed one at a time; while the overall rate remains unchanged, the quality of service delivered to each client degrades by a factor roughly equal to the number of concurrent connections.</p>
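<p>A rough back-of-the-envelope model (my own simplification, ignoring connection overhead) captures this: with a single-threaded server, each of <code>c</code> concurrent clients waits for roughly <code>c</code> full service times.</p>

```python
def expected_latency_ms(concurrency, service_time_ms):
    # With one worker, concurrent requests queue up behind each other,
    # so each client sees roughly `concurrency` service times.
    return concurrency * service_time_ms

# The single-connection run above measured a mean of ~36 ms per request:
print(expected_latency_ms(10, 36))  # 360, close to the observed 371 ms mean
```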
<p>Let's see if we can do better. Since we repeatedly calculate the same 30 Fibonacci numbers, let's add some caching into the mix. Generally, if you have long-running requests that always return an unchanging value, it is a good idea to cache them. With the caching in place, the first few requests will still have the same <em>slow</em> response time, but all following requests will benefit from the cache and therefore be as fast as our cache lookup. See the modified code below:</p>
<pre><code>#!/usr/bin/env python
#
# * To start this server, you must have python and flask installed
# * Copy this into a file named testserver-fib-cached.py
# * Start server: python testserver-fib-cached.py
#
# * To install flask use the pip line below:
#      `pip install Flask`
#   or visit: http://flask.pocoo.org/docs/0.12/installation/
from flask import Flask
import random
app = Flask(__name__)

cache = {}

# snagged from: http://stackoverflow.com/a/499245
def F(n):
    if n == 0: return 0
    elif n == 1: return 1
    else: return F(n-1)+F(n-2)

@app.route('/')
def hello_world():
    r = random.randint(1,30)
    if r in cache:
        print('hit')
        # ApacheBench expects constant output
        return 'Cache Hit!  fib({0:02}):{1:06}'.format(r, cache[r])
    else:
        fib = F(r)
        cache[r] = fib
        print('miss')
        # ApacheBench expects constant output
        return 'Cache Miss! fib({0:02}):{1:06}'.format(r, fib)

if __name__ == &quot;__main__&quot;:
    app.run(debug=True)
</code></pre>
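<p>As an aside (an alternative sketch, not part of the original example), Python's standard library offers <code>functools.lru_cache</code>, which can replace the hand-rolled dict; memoizing <code>F</code> itself means even the recursive sub-calls are served from the cache:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    # Memoized Fibonacci: each F(n) is computed once and then cached,
    # which also collapses the exponential recursion to linear time.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return F(n - 1) + F(n - 2)

print(F(30))  # 832040
```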
<p>Now let's run our single connection, 500 request benchmark again:</p>
<pre><code>-- snip --
Concurrency Level:      1
Time taken for tests:   1.680 seconds
Complete requests:      500
Failed requests:        0
Total transferred:      91000 bytes
HTML transferred:       13000 bytes
Requests per second:    297.55 [#/sec] (mean)
Time per request:       3.361 [ms] (mean)
Time per request:       3.361 [ms] (mean, across all concurrent requests)
Transfer rate:          52.89 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1    3  27.7      1     497
Waiting:        0    3  27.7      1     497
Total:          1    3  27.7      1     497

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      1
  98%      7
  99%     85
 100%    497 (longest request)
</code></pre>
<p>The results are quite impressive: the mean is down to <code>3.3ms</code>, the median down to <code>1ms</code> and the request rate is at <code>297rps</code>! That is 10x faster. Once the cache is initialized and our benchmark no longer includes the cache seeding time, we get even higher performance, which at this point is likely limited only by the cache lookups. My local testing gets me up to <code>1100rps</code> with median and mean both less than <code>1ms</code>. While this is a simple example for demonstration, it is important to note that part of what we are seeing is a misleading flaw in how most load-driving tools generate requests and record latencies, known as the <strong>coordinated-omission problem</strong>; but that is a topic for another day.</p>
<p>This concludes our short introduction to performance testing. Soon to follow will be articles addressing more complex setups, benchmarking methods and types, metrics to evaluate and considerations for repeatability.</p>
]]></content:encoded></item></channel></rss>