
Test Automation in Software Engineering

Test automation, in the context of software engineering, is a technique where the tester writes test scripts and runs them with a suitable automation tool. In other words, it automates a previously manual testing process. It allows repetitive tasks to be executed without a manual tester's intervention, and it is used to automate testing tasks that are difficult or tedious to perform by hand.

Key Aspects of Test Automation

  • Automation Testing: Automation testing involves using software tools to execute pre-scripted tests on a software application before it is released into production. This process helps in identifying bugs, ensuring software quality, and reducing manual testing efforts, making it a key component in continuous integration and continuous delivery pipelines.

  • Goal of Automation Tests: The goal of automation tests is to reduce the number of test cases that have to be executed manually, not to eliminate manual testing altogether. A recorded test suite can be replayed whenever required.

  • Efficiency of Automated Testing: Automated testing is more efficient than manual testing for repetitive bug detection: once written, test cases run unattended, with no human intervention required. Automation also makes broader test coverage practical, whereas achieving full coverage manually is rarely possible. Because the tests can be run frequently, the overall quality of the software under test improves.
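The unattended, repeatable nature of automated checks can be sketched in plain Python. The `slugify` function and its expected behaviour are made up for illustration; the point is that each check is an explicit assertion a machine can re-run without anyone watching:

```python
def slugify(title):
    """Toy function under test: turn a board title into a URL slug."""
    return title.strip().lower().replace(" ", "-")

def test_spaces_become_hyphens():
    assert slugify("Sprint Review Board") == "sprint-review-board"

def test_whitespace_is_stripped():
    assert slugify("  Backlog ") == "backlog"

# The same checks can be re-executed unattended after every change:
for check in (test_spaces_become_hyphens, test_whitespace_is_stripped):
    check()
    print(check.__name__, "PASS")
```

Running this prints a PASS line per check; a failing assertion would stop the run and report the broken test, which is exactly the observation work a manual tester would otherwise do.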

Tools for Test Automation

Commonly used tools for test automation include Selenium, TestComplete, Katalon Studio, UFT, SoapUI, Postman, and Apache JMeter. These tools help teams manage testing resources and anticipate testing-related challenges.

Review of test planning

Test planning starts when the different features are being defined. Thinking about the necessary tests in advance is an important part of development work and helps steer the implementation in the right direction. Tests are how we check whether the implementation meets its requirements. Roughly speaking, the tester and the developer are jointly responsible for thinking through the necessary tests and for sufficient test coverage. It is good to link the test descriptions to the implementation, for example as a link in the corresponding User Story issue. Where the test cases are documented in practice depends entirely on the operating environment. In the Fall 2021 Future Factory implementation, the default storage location is the Testlink service. An alternative is to record them in the Acceptance Criteria section.

In the initial phase of test planning, we consider what needs to be tested and how to perform the test manually. Thanks to the development of software production tools, test execution can then be transferred to automation tools (e.g. Robot Framework, Playwright, TestCafe, Cypress). These automation tools do not eliminate the need to first define what the test should do.
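As a sketch of this order of work, here is a hypothetical manual test case written down first and then automated in plain Python. The `Board` class is an invented stand-in for the system under test, not Contriboard's real API:

```python
# Manual test case, defined first (hypothetical board feature):
#   1. Open a new board            -> the board starts with no tickets
#   2. Add a ticket titled "Hello" -> the board contains exactly one ticket
#   3. Remove that ticket          -> the board is empty again

class Board:
    """Invented stand-in for the system under test."""
    def __init__(self):
        self.tickets = []
    def add(self, title):
        self.tickets.append(title)
    def remove(self, title):
        self.tickets.remove(title)

# The automated version executes the same steps and the same checks:
board = Board()
assert board.tickets == []          # step 1
board.add("Hello")
assert board.tickets == ["Hello"]   # step 2
board.remove("Hello")
assert board.tickets == []          # step 3
print("manual test case automated: PASS")
```

The automation tool changes how the steps are executed, not what the steps and expected results are; those come from the manual test design.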

Towards test automation

When the manual test cases have been planned and the testing process works manually, an important milestone has been reached. A challenge in the traditional testing model is the need to constantly re-execute certain test cases. These indispensable tests are typically the so-called regression tests, which check whether the software is still functional after modifications made to it. In practice, this inspection work requires re-running tests constantly, especially after changes to the software's basic features. Bug fixes are a good example: a fix can decisively change the previous implementation, and at worst it may break other parts of the software. Regression testing is therefore the area of testing with the greatest need for automation. The goal of automating regression testing is to save testing resources and hand the repetitive testing work over to automation.

Selected tests can be included in regression testing, for example:

  • functional tests (Functional Testing)
  • wider acceptance tests (Acceptance Testing)
  • service performance tests (Non-Functional - Performance Testing)
  • information security tests of the service (Non-Functional - Security Testing)
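The regression idea described above can be sketched in plain Python: a fixed set of checks is re-run after every change to guard previously working behaviour. The `format_price` function and its cases are invented for illustration:

```python
def format_price(cents):
    """Function under test: format a price given in cents, e.g. 1050 -> '10.50'."""
    return f"{cents // 100}.{cents % 100:02d}"

# Regression suite: the same cases are re-executed after every modification,
# so a bug fix elsewhere cannot silently break behaviour that already worked.
REGRESSION_CASES = [
    (1050, "10.50"),
    (5, "0.05"),    # small amounts need zero-padding
    (100, "1.00"),
]

def run_regression():
    """Return the list of failing cases; empty list means all checks pass."""
    failures = []
    for cents, expected in REGRESSION_CASES:
        got = format_price(cents)
        if got != expected:
            failures.append((cents, got, expected))
    return failures

print("regression failures:", run_regression())
```

Because the suite is cheap to re-run, it can be executed after every change, which is exactly the repetitive work that is worth handing to automation.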

Introducing test automation is not straightforward and requires some groundwork. It demands an investment from product development in the early stages to establish a suitable operating model. Performing test automation also requires different thinking than manual testing: the observation work that a human does while running tests is transferred to the responsibility of the test automation solution (e.g. Robot Framework).

Contriboard service test automation

Automated acceptance testing

Let's look at the test automation of the previously developed Contriboard service as an example. Note the unusual style of the acceptance test case used as an example: it is a so-called "soap opera" test case, meaning its goal is to describe vividly what is checked during the test and why those checks are performed. It is thus an acceptance test that executes a wider test scenario. A suitable set of these tests can be defined; in the case of the Contriboard service there are four. Example test cases can be found in the Testlink service and documented test cases in the GitHub service.

Test automation and the Robot Framework tool

Traditionally, test cases have been described in separate documents and managed with tools such as Testlink or Excel. Agile development has pushed toward producing less extra documentation, and as a result test automation serves both as a testing tool and as a possible way to document the test cases themselves. This is the idea in Maaret Pyhäjärvi's introduction to test automation:

Robot Framework to help with testing

Previous materials

Here is a compilation of previous materials using an older version of Robot Framework.

Check out the following materials:

  • https://bitbucket.org/robotframework/webdemo
  • http://robotframework.org/Selenium2Library/Selenium2Library.html
  • http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html
  • Paavo's lecture slides

Newer material

Robot Framework's interface to the system under test has become clearer with the new Browser library. The previously used separate browser-specific library is no longer needed; the Browser library can drive the test object directly. It's worth getting to know!

https://robotframework-browser.org/

Instructions for automating test cases

Think about the test cases you planned earlier: could you automate suitable ones?

  • Are there broader scenarios that can be classified as acceptance tests among the tests you have planned?
  • Have you defined functional tests related to features (Functional Testing)?

Instructions for writing tests:

Create your own repository to support automation, reserved for the scripts needed for test automation. See for example https://github.com/N4SJAMK/teamboard-test

  • Use a resource file
  • Use at least a couple of variables
  • Use at least a couple of pre-defined higher-level keywords
  • Use Setup and Teardown for the tests!
  • Write sufficient documentation in the test files
  • Name the tests and their steps cleverly, so clearly that no test-specific documentation is needed
  • Put the tests in a file of their own
  • You don't have to pass all the tests; for example, 3 PASS and 2 FAIL is fine
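Robot Framework is the intended tool for this exercise, but the structure the checklist asks for can be sketched in plain Python as well. All names and the application under test below are made up; the point is the shape: a shared resource section with variables and higher-level keywords, Setup and Teardown around each test, and test names clear enough to need no extra documentation:

```python
"""One test file of its own, following the checklist above (Python stand-in
for a Robot Framework suite; the application and its API are hypothetical)."""

# --- resource section: shared variables and higher-level keywords ---------
BASE_URL = "http://localhost:8000"   # variable: where the app would run
DEFAULT_USER = "tester"              # variable: login used by the keywords

def open_application(state):
    """Higher-level keyword used as Setup: start from a known state."""
    state["user"] = DEFAULT_USER
    state["boards"] = []

def create_board(state, name):
    """Higher-level keyword composed of smaller steps."""
    state["boards"].append(name)

def close_application(state):
    """Keyword used as Teardown: leave nothing behind for the next test."""
    state.clear()

# --- the tests, named so clearly that no extra documentation is needed ----
def test_created_board_appears_in_board_list():
    state = {}
    open_application(state)        # Setup
    create_board(state, "Demo")
    assert "Demo" in state["boards"]
    close_application(state)       # Teardown

test_created_board_appears_in_board_list()
print("suite OK")
```

In a real Robot Framework suite the same pieces would live in a `.resource` file (variables and user keywords) and a `.robot` test file with `[Setup]` and `[Teardown]` settings.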

https://testautomationu.applitools.com/learningpaths.html