Strap in, we’re about to get technical. But that’s the kind of people we are, and we’re guessing you are too. Today we’re talking about API Test Automation, a topic which causes a frankly unexpected degree of passionate discussions here at ORFIUM.
When you start talking about automated tests, you should always have the Test Pyramid in mind. Before implementing any test, first consider the Testing Layer that the test belongs to. Keep in mind these two amazing quotes from The Practical Test Pyramid by Martin Fowler:
If a higher-level test spots an error and there’s no lower-level test failing, you need to write a lower-level test
Push your tests as far down the test pyramid as you can
In our QA team at ORFIUM, we work on implementing automated tests for the topmost layers of the Pyramid: API Tests → End to End Tests → UI Tests, and we also perform Manual & Exploratory Testing. We mainly aim to have more API tests and fewer UI tests.
In this article, we will focus on API Test Automation, the framework we have chosen, our projects’ structure, and the test reporting. Plus a few tips & hints from our findings so far with this framework.
About Tavern API Testing
Tavern is a pytest plugin, command-line tool, and Python library for automated testing of APIs, with a simple, concise, and flexible YAML-based syntax. It’s very simple to get started, and highly customizable for complex tests. Tavern supports testing RESTful APIs, as well as MQTT-based APIs.
You can learn more about it on the Tavern official documentation and deep dive into Tavern.
The best way to use Tavern is with pytest. It comes with a pytest plugin, so all you have to do is install pytest and tavern, write your tests in test_*.tavern.yaml files, and run pytest. This means you get access to all of the pytest testing framework's advantages, such as being able to:
- execute multiple tests simultaneously
- automatically discover your test files
- pass various CLI options to run your tests with different configurations
- generate handy and pretty test reports
- organize your tests using markers, grouping them based on the functionality of the system under test
- use hooks and fixtures to trigger setup/teardown methods or test data ingestion
- use plenty of extremely helpful pytest plugins, such as pytest-xdist, allure-pytest, and more
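For example, a single run can combine several of these advantages at once – parallel workers, marker selection, and report collection (the smoke marker and the paths here are illustrative):

# run only the tests marked "smoke" on 4 parallel workers, collecting Allure results
pytest api_tests/tests -n 4 -m smoke --alluredir=reports/allure-api-results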
Why Tavern?
Our engineering teams mainly use Python, so it stands to reason that we should also build our Test Automation projects using Python. For the UI tests, we were already familiar with Python Selenium and BDD frameworks, such as behave & pytest-bdd. For the API tests, we researched Python API testing frameworks and we were impressed by Tavern because:
- It is lightweight and compatible with pytest.
- The YAML format of test files offers a higher level of abstraction in the API tests and makes it easier to read and maintain.
- We can reuse testing stages within a test file, so we can create an end-to-end test in a single YAML file (see the sketch after this list).
- We can manipulate requests and responses using Python requests to do the required validations.
- It is integrated with allure-pytest, providing very helpful test reports with thorough details of requests and responses.
- It has a growing and active community.
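To illustrate the stage-reuse point, here is a minimal sketch of a single .tavern.yaml file: a login stage is defined once with a YAML anchor and reused in a second test, since Tavern preserves anchors across documents in the same file. The endpoints, payloads, and the {host} format variable are illustrative, not our real API:

test_name: User can log in

stages:
  - &login_stage
    name: Log in with valid credentials
    request:
      url: "{host}/api/login"
      method: POST
      json:
        username: test_user
        password: secret
    response:
      status_code: 200
      save:
        json:
          auth_token: token

---

test_name: Logged-in user can fetch their profile

stages:
  - *login_stage
  - name: Fetch the profile using the saved token
    request:
      url: "{host}/api/profile"
      method: GET
      headers:
        Authorization: "Bearer {auth_token}"
    response:
      status_code: 200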
CI pipelines using Jenkins & Docker
Jenkins project structure
In collaboration with our DevOps team, we have set up a virtual server (an Amazon EC2 instance running Linux) with a Jenkins server installed. The Jenkins server resides on a custom host, accessible only by our engineering team. On this server, we have created multiple Jenkins projects, one for each QA automation project. Every Jenkins project contains two CI pipelines – one for the API tests and one for the UI tests. The pipelines use the Declarative Pipeline syntax, and the build script path is read directly from source code management (Pipeline script from SCM → Github), pointing to the respective Jenkinsfile of our QA automation projects.
This means that each QA automation project includes a Jenkinsfile for each type of test – one for the UI tests CI & one for the API tests CI.
A typical Jenkinsfile for our API tests includes the following stages:
- Stage 1: Check out the Github repository for the respective QA automation project
- Stage 2: Check out the application’s (backend) Github repository.
- Stage 3: Run the backend application by building the available docker-compose.yml along with a compose-network.yml that lives in the QA automation project. The latter compose file acts as a network bridge between the two Docker containers. When these two compose files are built, a host:port is exposed within the Jenkins server and the application's container is available on that port. (i)
- Stage 4: Build the automation project's docker-compose.yml. When this stage starts, a Docker container is built with the following command arguments: pytest -n 8 --env ${APP_ENV_NAME} -m ${TAG} (ii). The ${APP_ENV_NAME} is passed as a Jenkins parameter whose default value is the local env, so the tests run against the host:port that was previously deployed. (iii)
- Stage 5: Copy the generated test reports from the QA Docker container to the Jenkins server using the docker cp command, and publish the test results to the respective Google Chat channel.
- Stage 6: Teardown both containers and clean up the Jenkins workspace.
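Put together, a skeleton of such a Jenkinsfile might look like the following sketch – the repository URLs, service names, and report paths are illustrative placeholders, not our actual setup:

// Declarative Pipeline sketch following the stages above
pipeline {
    agent any
    parameters {
        string(name: 'APP_ENV_NAME', defaultValue: 'local', description: 'Environment to test against')
        string(name: 'TAG', defaultValue: 'regression', description: 'pytest marker to run')
    }
    stages {
        stage('Checkout QA project') {
            steps { checkout scm }
        }
        stage('Checkout backend') {
            steps { dir('backend') { git url: 'https://github.com/<org>/<backend-repo>.git' } }
        }
        stage('Run backend') {
            // Build the backend compose file together with the network bridge file
            steps { sh 'docker-compose -f backend/docker-compose.yml -f compose-network.yml up -d' }
        }
        stage('Run API tests') {
            steps { sh 'docker-compose up --build --abort-on-container-exit qa-tests' }
        }
        stage('Publish reports') {
            steps { sh 'docker cp qa-tests:/reports ./reports' }
        }
    }
    post {
        always {
            // Teardown containers and clean up the workspace
            sh 'docker-compose down --remove-orphans'
            cleanWs()
        }
    }
}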
Trigger Jenkins pipelines
Our Jenkins jobs can be triggered either manually, via the Jenkins GUI, or automatically, whenever a pull request is opened against our backend application's Github repository. This is feasible via the Jenkins feature that allows triggering jobs remotely with a curl command, so we have added a script to our application's deployment process that runs this command. As a result, whenever a pull request or push event happens, our Jenkins CI job is triggered and runs our tests against that specific pull request environment. When the job finishes, our test reports are published to a test reporting channel. Finally, we have added a periodic test run that runs once daily against our staging environment.
i) A bridge between the two containers' networks is required so that the automation project's container can access the application container's port. This is needed because we also build the QA automation project using a docker-compose file, so both the application and the QA project are built locally and need access to the same network.
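The exact wiring depends on the application's compose file; a minimal sketch of such a compose-network.yml, assuming the backend service is called app and the shared bridge network qa-network (both names are illustrative):

version: "3"
services:
  app:
    networks:
      - qa-network
networks:
  qa-network:
    external: true  # created beforehand, e.g. with: docker network create qa-network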
ii) -n 8 uses the pytest-xdist plugin, which enables test run parallelization: if you have multiple CPUs or hosts, you can use them for a parallel test run, significantly reducing the test execution time.
iii) wait-for-it is used as the entrypoint in the docker-compose.yml, since we have to wait a few seconds until the application's container is up and running. For that purpose, we implemented a shell script that does the following:
#!/bin/bash
set -o errexit  # Exit when a command returns a non-zero exit status (failed).
# Only wait for the server when running against the local environment.
if [ "$APP_ENV_NAME" == "local" ]; then
    wait-for-it --service "$HOST" --timeout 20
fi
exec "$@"  # Make the entrypoint a pass-through that then runs the docker command.
https://github.com/Orfium/qa-tutorial/blob/main/wait_for_local_server.sh
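For context, this is roughly how the script is wired in as the entrypoint of the QA project's docker-compose.yml – the service name and the HOST value are illustrative:

services:
  qa-tests:
    build: .
    environment:
      - APP_ENV_NAME=${APP_ENV_NAME}
      - HOST=app:8000  # the host:port the script waits for (illustrative)
    entrypoint: ["./wait_for_local_server.sh"]
    command: ["pytest", "-n", "8", "--env", "${APP_ENV_NAME}", "-m", "${TAG}"]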
Why Docker?
By having Docker containers for both the application and the automation project, we ensure that:
- The tests are isolated and run smoothly anywhere, independent of the OS, which increases portability
- We do not have to maintain a test environment, since containers are spawned and destroyed during the CI test run on Jenkins
- There is no need to clean up any test data in the test environment, since it is destroyed when the test run is completed
- We keep improving our skills in the Docker containers and CI world
Building API Test Automation projects with Tavern
Our test automation projects at Orfium follow a common architecture and structure. We started our journey in API test automation with a proof-of-concept (PoC) project for one of our products, trying Tavern for the first time. We continued working, sprint by sprint, adding new API tests and troubleshooting any issues we encountered. We managed to scale this PoC and reach an adequate number of API tests. Then, since the PoC was successful, we decided to apply this architecture to new test automation projects for other products too. Here's the process we settled on after adopting our API test automation framework.
Structure
A new Python project is created along with its respective Python virtual environment. In the project root, we add the following:
- a requirements.txt file which includes the pip packages that we need to install in the virtual environment
- a Dockerfile that builds an image for our project
- a docker-compose.yml file that builds, creates, and starts a container for our tests, passing the pytest command with the desired arguments – this way we run our tests “containerized”
- a compose-network.yml that is used as a bridge network between the test Docker container and the application’s container that we will see next.
- a Jenkinsfile that includes the declarative Jenkins script used for running our tests in the Jenkins server
- a data folder that contains any test data we may want to ingest via our API tests, e.g. csv, json, xlsx, png, jpg files
- a new directory named api_tests. Within this directory, we add the following:
- tests: contains all the .tavern.yaml files, e.g. test_login.tavern.yaml
- verifications: a python package that contains all the Python verification methods
- hooks: contains Tavern stages in yaml file format that are reused in multiple test_whatever.tavern.yaml files
- conftest.py: includes all the pytest fixtures and hooks we use in the api_tests directory scope
- pytest.ini: all the flags and markers we want to run along with the pytest command
- config: this folder contains the following configuration files:
- config.py: sets up the environment handling classes, endpoints, and other useful stuff used across our API tests
- routes.yaml: includes all the endpoint paths of the application under test, as key-value pairs
- log_spec.yaml: defines the Tavern logging configuration, which is very useful for debugging
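Putting it all together, the resulting layout looks roughly like this:

qa-<project-name>/
├── requirements.txt
├── Dockerfile
├── docker-compose.yml
├── compose-network.yml
├── Jenkinsfile
├── data/
└── api_tests/
    ├── tests/
    │   └── test_login.tavern.yaml
    ├── verifications/
    ├── hooks/
    ├── conftest.py
    ├── pytest.ini
    └── config/
        ├── config.py
        ├── routes.yaml
        └── log_spec.yaml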
Prerequisites
Tavern only supports Python 3.4 and up. You will also need to have virtualenv and virtualenvwrapper installed on your machine.
Install tavern in your virtual environment, assuming that virtualenvwrapper has already been installed and configured on your machine:
# on macOS / Linux with virtualenvwrapper
mkvirtualenv qa-<project-name>
cd ~/path-to-virtual-env/
# activate virtual env
workon qa-<project-name>
# install Tavern pip package
pip install tavern

# on Windows - with virtualenv
pip install virtualenv
virtualenv <virtual-env-name>
cd ~/path-to-virtual-env/
# activate virtual env
<virtual-env-name>\Scripts\activate
# install Tavern pip package
pip install tavern
Set up PYTHONPATH
To make sure that Tavern can find external functions, you need to make sure that they are on the Python path. Check out the Tavern docs section: Calling external functions.
So, the paths api_tests/verifications & api_tests/tests (which contain the Python verification methods and the Tavern tests) have to be added to the PYTHONPATH. This setup can be handled either manually or automatically, as described below:
→ manually: In your .rc file (zshrc, bashrc, activate, etc.) add the following line:
# on macOS/Linux
export PYTHONPATH="$PYTHONPATH:/qa-<project-name>/tests/api_tests/tests"
# on Windows
set PYTHONPATH=%PYTHONPATH%;C:\<path-to-qa-project>\tests\api_tests\tests
# (or set it via the Advanced System Settings -> Environment Variables dialog)
→ via the conftest file: Anything included in a conftest.py file is accessible across multiple test files, since pytest loads the module and makes it available to your tests. This way, even if you did not set up the PYTHONPATH manually, it will still be configured within your test run as expected. Add the following to your conftest.py file:
import sys
from pathlib import Path, WindowsPath

path = Path(__file__)
if sys.platform == "win32":
    path = WindowsPath(__file__)

# Resolve the tests/ and verifications/ directories relative to this conftest.py.
INCLUDE_TESTS_IN_PYTHONPATH = path.parent / "tests"
INCLUDE_VERIFICATIONS_IN_PYTHONPATH = path.parent / "verifications"

# Append the directories to sys.path only if they are not already there.
try:
    sys.path.index(INCLUDE_TESTS_IN_PYTHONPATH.as_posix())
    sys.path.index(INCLUDE_VERIFICATIONS_IN_PYTHONPATH.as_posix())
except ValueError:
    sys.path.append(INCLUDE_TESTS_IN_PYTHONPATH.as_posix())
    sys.path.append(INCLUDE_VERIFICATIONS_IN_PYTHONPATH.as_posix())
Create your first test
Within the api_tests/tests folder, create your first test file, naming it test_<test_name>.tavern.yaml.
Each test within this YAML file can have the following keys:
- test_name: Add the test name with a descriptive title
- includes: Specify other yaml files that this test will require in order to run
- marks: Add a tag name to the test that can be used upon test execution as a pytest marker
- stages: The key stages is a list of the stages that make up the test
- name: A sub-key of stages that describes the stage's scope.
- request: A key within each stage that contains the request-related keys:
- url – a string, including the protocol, of the address of the server that will be queried
- json – a mapping of (possibly nested) key: value pairs/lists that will be converted to JSON and sent as the request body.
- params – a mapping of key: value pairs that will go into the query parameters.
- data – Either a mapping of key-value pairs that will go into the body as application/x-www-form-urlencoded data, or a string that will be sent by itself (with no content-type).
- headers – a mapping of key: value pairs that will go into the headers. Defaults to adding a content-type: application/json header.
- method – one of GET, POST, PUT, DELETE, PATCH, OPTIONS, or HEAD. Defaults to GET if not defined
- response: A key within each stage that contains the expected response keys:
- status_code – an integer corresponding to the status code that we expect, or a list of status codes, if you are expecting one of a few status codes. Defaults to 200 if not defined.
- json – Assuming the response is json, check the body against the values given. Expects a mapping (possibly nested) key: value pairs/lists. This can also use an external check function, described further down.
- headers – a mapping of key: value pairs that will be checked against the headers.
- redirect_query_params – Checks the query parameters of a redirect url passed in the location header (if one is returned). Expects a mapping of key: value pairs. This can be useful for testing the implementation of an OpenID connect provider, where information about the request may be returned in redirect query parameters.
- verify_response_with: Use an external function to verify the response
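Putting these keys together, a first test could look like the following sketch. The endpoint, payload, and verification function are illustrative placeholders, not our real API, and the {app_env_settings.api_url} format variable is explained in the next section:

test_name: User is able to log in with valid credentials

marks:
  - smoke

stages:
  - name: Send valid credentials to the login endpoint
    request:
      url: "{app_env_settings.api_url}/api/login"
      method: POST
      json:
        username: test_user
        password: correct-password
    response:
      status_code: 200
      json:
        logged_in: true
      verify_response_with:
        # hypothetical external function living in api_tests/verifications
        function: verifications.login:check_token_format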
Set up the test environment
In our SDLC we run the tests against multiple environments: local environment → pull request environments → staging environment. For that purpose, we have created a Python class named AppEnvSettings that handles the different environments dynamically, taking the application's environment name from a CLI option in the pytest command. To be more specific, we have added the following pytest fixture within the api_tests/conftest.py file, along with the respective pytest_addoption method.
This fixture is used whenever a pytest session starts; it invokes the aforementioned class, passing the environment as an argument via a CLI option. So, if we want to run our tests against a specific environment, we run pytest --env staging.
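A minimal sketch of that wiring is shown below – the AppEnvSettings body is simplified, and the hosts mapping is purely illustrative (our real class lives in config/config.py):

import pytest


class AppEnvSettings:
    # Simplified stand-in for the real class in config/config.py.
    def __init__(self, env: str):
        self.env = env
        # Illustrative mapping of environment names to base API URLs.
        hosts = {
            "local": "http://localhost:8000",
            "staging": "https://staging.example.com",
        }
        self.api_url = hosts.get(env, hosts["local"])


def pytest_addoption(parser):
    # Register the --env CLI option; "local" is the default target.
    parser.addoption("--env", action="store", default="local",
                     help="Environment the tests run against")


@pytest.fixture(autouse=True)
def app_env_settings(request):
    # Build the settings object from the --env CLI option for every test.
    return AppEnvSettings(request.config.getoption("--env"))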
Since Tavern works perfectly with pytest, the app_env_settings fixture is automatically available in the .tavern.yaml files, and you can use the constructed api_url within your Tavern tests this way:
stages:
  - name: User is able to login with valid registration details
    request:
      url: "{app_env_settings.api_url}{login_route}"
You can find the AppEnvSettings class code in the config.py example.
Run the tests
cd api_tests/tests
pytest # or pytest <test_name>.tavern.yaml to run a specific test file
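You can combine the usual pytest options here as well, for example (the marker name is illustrative):

# run the tests marked "smoke" against staging, on 8 parallel workers
pytest --env staging -m smoke -n 8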
Generate Test reports
When a test run finishes, we use the allure-pytest plugin to generate our test reports. The generated reports are quite handy and help us identify failures easily. Allure is also well integrated with Tavern, so we see the test reports in a detailed view per Tavern stage. This way, we are able to monitor failures while also being able to check the requests and responses for each test. In order to use this reporter, you have to install the allure-pytest package in your virtual environment and configure your pytest.ini file:
addopts=
# path to store the allure reports
--alluredir="../reports/allure-api-results"
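After a run, the Allure CLI can render the collected results into an HTML report and open it locally:

# generate and serve the HTML report from the collected results
allure serve ../reports/allure-api-results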
Bottom line
Choosing Tavern as our API testing framework has been a great pick for us so far! We have not encountered any blockers at all as we scale our automation projects. We also utilize all the benefits of pytest at test execution time, such as pytest fixtures, parallelization with pytest-xdist, and the many other features described above. Tavern also gives us a higher level of abstraction, since the tavern.yaml files increase readability in our automation projects and provide very detailed test reports. We can easily identify which stage of an end-to-end test failed and what caused the failure. Which is the whole point.
Moreover, having different stages within the .tavern.yaml file allows us to create end-to-end tests that describe a business process, rather than just API tests that validate the responses of individual requests. We can simulate the customer's behavior using API calls and validate that our application will work as expected in business-critical scenarios.
Finally, triggering our CI jobs automatically upon new pull & merge requests provides very helpful feedback during the Software Development Lifecycle (SDLC). We ensure that our regression test suites run successfully in staging and QA environments, and this way we increase our confidence that, when a new feature is released to production, it will not affect the existing functionality of our applications.
Resources
You can find more examples and details here → Tavern test examples
Konstantinos Konstantakopoulos
Staff QA engineer @ ORFIUM