What is performance testing?

According to the ISTQB (International Software Testing Qualifications Board), performance testing is “testing to determine the performance of a software product.”

According to Wikipedia: “… a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.”

So performance testing is a non-functional testing process that deals with the performance of an application (for example, a Magento online store) and can be used throughout the development process to detect and prevent problems.

In my opinion, Wikipedia’s definition comes closer, although giving a perfect definition, as in many other cases, is impossible.

Why is performance testing necessary?

Internet penetration is now globally widespread, so more and more people buy products online from ecommerce stores. Instead of visiting physical shops, trying products and asking the staff for information, many customers research products online, comparing and examining them like “shopping experts” across various e-stores around the world to find the items that best suit their needs. As holidays and special deals (e.g. Christmas or Black Friday) approach, more and more customers visit online stores rather than their brick-and-mortar counterparts.

Your store has to withstand increased or continuously heavy traffic, which is why performance testing is crucial.

Your ecommerce store is made up of different components: it has software, hardware and network-related parts, which can be broken down into subsections such as the system framework, the processor performance of the server or the amount of data on a downloaded page. If you do not pay attention to all of these, your e-store can slow down during periods of increased traffic, become inaccessible or, in the worst case, crash completely.

Types of performance testing

Below, we take a look at the different kinds of relevant performance testing, highlighting the types that matter most for online stores. (For further information, I recommend reading Wikipedia’s software testing article.)

1. Load testing: The simplest form of performance testing: we measure how the application behaves under normal or higher load.

a) Endurance testing: Measures system operation under continuous load for a longer period of time, so errors that may stay hidden after a few hours of testing can be revealed after several days of testing.

2. Stress testing: Used to identify the upper limits of capacity and the breaking points. This type of testing helps determine the system’s robustness under extreme load and helps administrators identify the ideal and maximum operating range.

a) Capacity testing: With stress testing, we measure how many queries/actions/users the system can handle simultaneously without faults.

3. Soak testing: Soak testing, or endurance testing, simulates normal system operation to examine how well the system sustains a continuous, normal load. During the test, memory utilization also has to be monitored so that memory leaks can be detected.

4. Spike testing: We apply a sudden burst of load to the system and then reduce it just as quickly. This quick change appears as a spike on the load chart, hence the name.

5. Configuration testing: You may ask what this kind of testing has to do with performance testing. It is useful because it examines how configuration settings influence the whole system or parts of it, primarily in terms of performance.

6. Isolation testing: Not unique to performance testing. It is a repeated test that reproduces a system error; such tests can often isolate the fault domain and environment.
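In Tsung, the tool introduced below, load shapes like these are expressed as arrival phases. A hypothetical sketch of a spike-test profile might look like the following (the durations and rates are illustrative, not from the original):

```xml
<load>
  <!-- phase 1: normal load, one new user every 2 seconds for 5 minutes -->
  <arrivalphase phase="1" duration="5" unit="minute">
    <users interarrival="2" unit="second"></users>
  </arrivalphase>
  <!-- phase 2: the spike, ten new users per second for 1 minute -->
  <arrivalphase phase="2" duration="1" unit="minute">
    <users arrivalrate="10" unit="second"></users>
  </arrivalphase>
  <!-- phase 3: back to normal load -->
  <arrivalphase phase="3" duration="5" unit="minute">
    <users interarrival="2" unit="second"></users>
  </arrivalphase>
</load>
```

Phases run one after the other, so the middle phase produces the sudden burst that appears as a spike on the load chart.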

Steps of testing

1. Identifying and creating the test environment
2. Identifying acceptance criteria
3. Planning tests (writing testing scenarios)
4. Configuring the test environment (data upload, setting parameters)
5. Implementing tests
6. Running tests
7. Evaluating results, making reports, retesting

Tsung (IDX-Tsunami 1.6.0)

Tsung is a distributed load and stress testing tool. It supports multiple protocols and can test servers that communicate over, among others, HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP and Jabber/XMPP.

The main advantage of Tsung is that it can simulate traffic with a great number of visitors from a single computer. If we use it on a set of computers (cluster), it performs remarkably well while offering easy configuration and maintenance.
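In a cluster setup, the machines that generate the load are listed in the <clients> section of the configuration. A minimal sketch (the hostnames are hypothetical) might be:

```xml
<clients>
  <!-- two load-generating nodes; weight splits users between them 1:2 -->
  <client host="loadgen1" weight="1" maxusers="5000"/>
  <client host="loadgen2" weight="2" maxusers="5000"/>
</clients>
```

Tsung starts the remote nodes over SSH by default, so the controller machine needs passwordless SSH access to each listed host.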

Characteristics

High performance

Distributed

Multi-protocols

SSL support

Allocation of several IP addresses on a single machine

Monitoring the operating system during testing

XML configuration system

Dynamic scenarios (transactions)

Mixed user behaviours (work processes or sessions)

Stochastic processes (thinktimes)

What is Erlang and why is it important?

Tsung is developed in Erlang, which makes it very capable, since Erlang is a concurrency-oriented programming language. Erlang/OTP (Open Telecom Platform) serves as the basis of Tsung, offering the following features:

Performance

Scalability

Fault tolerance

Protocols and performance

Tsung is capable of reaching high performance if given a suitable environment. In numbers, this means the following:

Jabber/XMPP protocol:

90,000 simultaneous Jabber users on a 4-node Tsung cluster.

10,000 simultaneous users with Tsung running on a 3-computer cluster (800 MHz CPUs).



HTTP and HTTPS protocols:

12,000 simultaneous users at 3,000 requests/second, with Tsung running on a 4-computer cluster (in 2003).

10,000,000 simultaneous users and more than 1,000,000 queries/second, with Tsung on a 75-computer cluster.



How to use Tsung

First you need to install Tsung on a server; the Amazon EC2 virtual server hosting service is a convenient solution for this.

Installing Tsung

Let’s take a look at how to install Tsung. The VPS configuration used here is the following:

CentOS 6.7 operating system

CPU: 8 core (Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz)

Memory: 3 GB

Storage: 10 GB HDD

First, you need to install Erlang, the Perl dependencies and Firefox, since reports are generated and viewed with their help.

[root@server ~]# yum -y install erlang perl perl-RRD-Simple.noarch perl-Log-Log4perl-RRDs.noarch gnuplot perl-Template-Toolkit firefox

Then download and install Tsung:
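The original commands are not shown here; a typical source install of Tsung 1.6.0 looks roughly like this (the download URL and build steps are the Tsung project's usual ones, so verify them against the Tsung site):

```shell
# download, build and install Tsung 1.6.0 from source
wget http://tsung.erlang-projects.org/dist/tsung-1.6.0.tar.gz
tar -xzf tsung-1.6.0.tar.gz
cd tsung-1.6.0
./configure
make
make install
```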

For the Tsung report generation command, create a pre-set alias for easier use (here using vim):

[root@server ~]# vim ~/.bashrc
# add the following line, then save and quit (:wq)
alias treport="/usr/lib/tsung/bin/tsung_stats.pl; firefox report.html"
[root@server ~]# source ~/.bashrc

Prepare Tsung for the first use (optional):

Configuring Tsung (/root/.tsung/tsung.xml)

The first part of the configuration file is relatively fixed, while the load and monitoring can be flexibly managed with the help of workflows (sessions) and transactions.

Let’s take a look at this file as an example, examples/http_simple.xml:

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">

  <!-- Client side setup -->
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>

  <!-- Server side setup -->
  <servers>
    <server host="195.56.150.103" port="80" type="tcp"></server>
  </servers>

  <!-- to start os monitoring (cpu, network, memory) -->
  <monitoring>
    <monitor host="195.56.150.103" type="snmp"></monitor>
  </monitoring>

  <load>
    <!-- several arrival phases can be set: for each phase, you can set
         the mean inter-arrival time between new clients and the phase
         duration -->
    <arrivalphase phase="1" duration="10" unit="minute">
      <users interarrival="2" unit="second"></users>
    </arrivalphase>
  </load>

  <options>
    <option type="ts_http" name="user_agent">
      <user_agent probability="80">Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050513 Galeon/1.3.21</user_agent>
      <user_agent probability="20">Mozilla/5.0 (Windows; U; Windows NT 5.2; fr-FR; rv:1.7.8) Gecko/20050511 Firefox/1.0.4</user_agent>
    </option>
  </options>

  <!-- start a session for a http user. the probability is the frequency
       of this type of session. The sum of all sessions' probabilities
       must be 100 -->
  <sessions>
    <session name="http-example" probability="100" type="ts_http">
      <!-- full url with server name, this overrides the "server" config value -->
      <request>
        <http url="/" method="GET" version="1.1"></http>
      </request>
      <request>
        <http url="/images/accueil1.gif" method="GET" version="1.1" if_modified_since="Fri, 14 Nov 2003 02:43:31 GMT"></http>
      </request>
      <request>
        <http url="/images/accueil2.gif" method="GET" version="1.1" if_modified_since="Fri, 14 Nov 2003 02:43:31 GMT"></http>
      </request>
      <request>
        <http url="/images/accueil3.gif" method="GET" version="1.1" if_modified_since="Fri, 14 Nov 2003 02:43:31 GMT"></http>
      </request>
      <thinktime value="20" random="true"></thinktime>
      <request>
        <http url="/index.en.html" method="GET" version="1.1"></http>
      </request>
    </session>
  </sessions>
</tsung>

In the above example, Tsung, installed on the server, initiates clients from localhost whose destination is port 80 of the server with the IP address 195.56.150.103. For monitoring the server, we use SNMP protocol and a Tsung client.

In the <load> section you can see that the test comprises one phase, which runs for 10 minutes and creates a new user every 2 seconds.
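As a quick sanity check, the number of simulated users such a phase creates can be computed from the phase duration and the inter-arrival time:

```python
# one 10-minute arrival phase, one new user every 2 seconds
phase_duration_s = 10 * 60
interarrival_s = 2

total_users = phase_duration_s // interarrival_s
print(total_users)  # 300 users started during the phase
```

So this configuration launches about 300 users in total, spread evenly over the 10 minutes.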

The created users identify themselves with the browsers defined in the <options> section, in an 80-20% proportion.

The <sessions> section, describing the work processes, defines the users’ interaction, which consists of the following steps:

1. Accessing the opening page of 195.56.150.103 (HTTP GET).
2. Requesting the image 195.56.150.103/images/accueil1.gif, checking whether it has been modified since the given time.
3. Requesting the image 195.56.150.103/images/accueil2.gif, checking whether it has been modified since the given time.
4. Requesting the image 195.56.150.103/images/accueil3.gif, checking whether it has been modified since the given time.
5. The user then waits for about 20 seconds on average (a randomized thinktime).
6. Loading the 195.56.150.103/index.en.html page.

Tsung configuration options

Beyond the possibilities mentioned in the XML example, you have further options to customize your performance testing. The list below includes such options, for further details please study the configuration XML documentation of Tsung.

Upper limit of number of users (maxusers)

Users to be created dynamically or in a static way

Defining the maximum runtime of sections

Setting “thinking time” of users, random and hibernation settings

Setting time-out value for connection

Number of retries if connection is not re-established

Option for HTTP, LDAP authentication

Running MySQL queries

Changeable work process types

Loading and processing external files (CSV)

Using dynamic variables (JSONPath, Regexp, XPath, PostgreSQL)

Implementing iterations (for, repeat, if, foreach)
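As an illustration of the last few options, a hypothetical session fragment might read usernames from a CSV file and capture a token from a response with a regular expression (the file name, variable names and the regexp are invented for this example):

```xml
<!-- in <options>: register the CSV file (fields separated by ";") -->
<option name="file_server" id="userdb" value="users.csv"/>

<!-- inside a <session>: read one CSV line into two variables -->
<setdynvars sourcetype="file" fileid="userdb" delimiter=";" order="iter">
  <var name="username"/>
  <var name="password"/>
</setdynvars>

<!-- capture a token from the response, then reuse it with subst="true" -->
<request>
  <dyn_variable name="token" re="name='token' value='([^']+)'"/>
  <http url="/login" method="GET" version="1.1"></http>
</request>
<request subst="true">
  <http url="/login" method="POST" version="1.1"
        contents="user=%%_username%%&amp;pass=%%_password%%&amp;token=%%_token%%"></http>
</request>
```

The %%_name%% placeholders are substituted per user at request time, so each simulated visitor can log in with its own credentials.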

Parameterizing and running Tsung

Running Tsung is relatively simple. I definitely recommend using the Screen application, so that the test keeps running even if the connection to the VPS is lost.

Let’s take a look at the built-in helper, which, I believe, does not need further explanation:

[root@server ~]# tsung -h
Usage: tsung <options> start|stop|debug|status
Options:
    -f <file>     set configuration file (default is ~/.tsung/tsung.xml)
                  (use - for standard input)
    -l <logdir>   set log directory (default is ~/.tsung/log/YYYYMMDD-HHMM/)
    -i <id>       set controller id (default is empty)
    -r <command>  set remote connector (default is ssh)
    -s            enable erlang smp on client nodes
    -p <max>      set maximum erlang processes per vm (default is 250000)
    -m <file>     write monitoring output on this file (default is tsung.log)
                  (use - for standard output)
    -F            use long names (FQDN) for erlang nodes
    -w            warmup delay (default is 10 sec)
    -v            print version information and exit
    -6            use IPv6 for Tsung internal communications
    -h            display this help and exit

Run Tsung from the /root/.tsung/ directory, with the configuration file simple_website_check.xml, as follows:

[root@server ~]# screen tsung -f simple_website_check.xml start

Tsung runs as a separate task after executing the command, provided the XML configuration file has valid syntax. Tsung validates the file before running; if it finds an error, it aborts and prints troubleshooting information. The log files generated during the test are placed in the ~/.tsung/log/YYYYMMDD-HHMM directory by default, which can be changed with the -l parameter.

During the run phase

From version 1.6 on, we get a direct web-based monitoring option (a dashboard) while the test is running, which is an enormous help in seeing how the testing process evolves. It gives us the chance to intervene in time if the test starts showing extreme results or if the system is about to crash.

The dashboard can be accessed on the Tsung server via port 8091 with the following parameters:

{tsung server domain/ip}:8091/es/ts_web:status

After the run phase: reporting

When the test has finished, we find the log files (.log), the XML configuration file and copies of any attached CSV files in the specified (or default) folder. After entering the folder and executing the previously created treport command, we can generate the HTML report containing the test results, which we can view in a browser by opening the report.html file.

Before the report file is created, only the log files and the HTML page of the dashboard can be found in the log folder.

Structure of reports

Just open the report in a browser and you will immediately see the results of the performance testing. The report is detailed enough to determine the breakpoints of the system, the inappropriate processes or their weak points. Thanks to these, we can refactor the code, scale the server or develop the application to make it stronger or faster.

The menu on the right can be divided into two major sections. The upper one contains statistical figures:

Main statistics

Transactions: summary of transactions

Network Throughput: throughput capacity of the network (speed / amount of data)

Counters: users, successful connections, phases run etc.

Server monitoring: result of monitoring

HTTP status: HTTP status codes

Errors: errors found

In the second part, we can see the graphs; however, I regard this feature as rather basic.

Graphs

Response times: change in response times during testing.

Throughput graphs: change in network load during testing.

Simultaneous Users: behaviour of the simulated users throughout the test.

Server monitoring: CPU and memory usage of the operating system.

HTTP status: HTTP response codes during testing.

Errors: errors detected during testing.