A constant challenge for the Quality Assurance Specialist is dispelling the notion that testing is a financial drain and a time bottleneck that threatens product delivery. My counter-argument has always been that post-release bug fixes for poorly tested (or entirely untested) software are more expensive and damage the product's reputation. Properly planned and executed testing processes do not cause delays or excess costs.
This post describes the Continuous Integration/Continuous Testing practice adopted by the Unleashed Technologies team. Adjustments, of course, will have to be made as the realities of providing software to our clients dictate; but for now, our team is opting to develop and test in parallel while staying on time and on budget.
Instead of the usual practice of coding a list of features for every sprint, a prioritized list of key features is recorded in a TESTME.md file. This simple list, which may be accompanied by a few essential user stories, focuses both the Developer and the Tester on coding and test creation for each feature simultaneously, working down the list in order of importance.
The following is an example of a key features list:
ABC Company is a multi-site, multi-region & multi-language Drupal 8 project.
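The list itself is project-specific; a hypothetical TESTME.md for a project of this shape might look like the following sketch (the feature names are illustrative, not taken from the actual project):

```markdown
# TESTME.md -- prioritized key features (illustrative example)

1. Language switcher resolves the correct translation on every site
2. Region-specific content is served on the correct site/domain
3. User login and role-based access control
4. Site-wide search returns results in the active language

<!-- Optional: a one-line user story per feature, e.g. -->
<!-- As a visitor, I can switch languages and stay on the same page. -->
```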
I would like to highlight that this approach works for both new and existing projects. The job of the QA Specialist in this phase is to concentrate on writing the project's Test Strategy, the Sprint Test Plan (usually no more than two pages in length), and functional test cases.
After organizing and prioritizing the list of features for development in the TESTME.md file, the next step is to write just one or two draft Behat tests based on it. These drafts create the incentive to take the next step: building the configuration that allows the tests to be run and debugged. Care must be taken with the test suite configuration to verify that the tests do, in fact, run.
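A minimal Behat configuration along these lines might look like the sketch below (the paths and suite names are assumptions, not our actual setup):

```yaml
# behat.yml -- minimal illustrative configuration
default:
  suites:
    default:
      paths:
        features: features
      contexts:
        - FeatureContext
  gherkin:
    filters:
      tags: "~@wip"   # exclude work-in-progress drafts from normal runs
```

The tag filter keeps draft tests out of routine runs while still letting them be executed explicitly with `--tags=@wip` during development.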
These are draft tests that will probably fail at first. The point of behavior-driven testing is to facilitate communication between the project team and the clients through careful observation of the results produced by the steps written in the Gherkin language. In essence, this is Test-Driven Development, since tests (or draft tests) are defined as early as possible in the development process. Draft tests (e.g. work-in-progress tests tagged with @wip) help define the initial business logic described by the project stakeholder. As the project matures, draft tests are refined as necessary, eventually becoming fully executable tests. The fully executable tests confirm that the project meets or exceeds the stakeholder-described business requirements.
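A draft test of this kind might look like the following Gherkin sketch (the feature and the custom step wording are illustrative):

```gherkin
@wip
Feature: Language switching
  As a visitor
  I want to switch the site language
  So that I can read content in my preferred language

  Scenario: Visitor switches from English to French
    Given I am on the homepage
    When I follow "Français"
    Then the page content should be displayed in French
```

The @wip tag marks it as a work-in-progress draft; the final step would initially have no matching step definition, which is exactly the conversation-starter that behavior-driven testing is after.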
The advantages of unit testing are many; among them are improved code quality through early bug detection, built-in documentation of the code's details, and a simpler debugging process.
Depending on the project, the Developer proceeds to API testing after the Unit and Integration tests have been executed. API tests are performed with tools such as Postman or soapUI.
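In Postman, for example, a basic response check can be sketched in a request's Tests tab. This snippet runs inside Postman's JavaScript sandbox (not standalone), and the thresholds are illustrative:

```javascript
// Runs after the request completes, inside Postman's sandbox
pm.test("status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("response time is acceptable", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
```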
By this point, the QA Specialist has had the time and opportunity to review the Creative Briefs, Requirements, and Design documents, as well as the TESTME.md prioritized list. The Sprint has also been planned, and the features for development and testing have been defined, giving the QA Specialist time to create tests for the sprint and submit them to informal review for more ideas on test priorities and on how and what to test. It can now be seen that Development and Testing proceed in parallel and work for their mutual benefit.
End-to-End tests are also created, reviewed, and executed to verify how well the newly developed features work with existing ones. The test results are collected and made available to everyone on the project, as well as to the Client. Transparency is an important point in testing: value is added to the project when weak points and unstable areas are found and fixed. The test results show which areas were initially unstable, what kinds of issues were found, and that the fixes have eliminated those issues in the applications under test. Screenshots show the final expected results of the tests.
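An end-to-end scenario typically chains several features together; a hypothetical sketch, with the role and page names purely illustrative:

```gherkin
Feature: Authenticated multi-language browsing
  Scenario: Editor reviews a translated page
    Given I am logged in as a user with the "editor" role
    When I follow "Français"
    And I visit the French version of the "About" page
    Then I should see the page content in French
    And I should see the link "Edit"
```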
Once it is confirmed that the functional tests have passed, a Regression set is created and automated, either with the Behat tests mentioned above or with a suite of tests created in Codeception. Experience has shown that software features that previously passed and functioned reliably in production can fail after new features are introduced. At this point, all tests can be executed in the Pipeline (GitLab, in our case).
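In GitLab, running the regression set on every push can be sketched in .gitlab-ci.yml; the image, stage name, commands, and artifact path below are assumptions for illustration, not our actual pipeline:

```yaml
# .gitlab-ci.yml -- illustrative sketch of a test stage
stages:
  - test

behat_regression:
  stage: test
  image: php:8.1-cli
  script:
    - composer install --no-interaction
    - vendor/bin/behat --tags="~@wip"   # run everything except drafts
  artifacts:
    when: always
    paths:
      - build/behat/   # collected results and screenshots (assumed path)
```

Collecting artifacts even on failure (`when: always`) is what makes the results available to the whole team and the Client.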
Test result statistics aside, the best measure of improved software quality is happy clients whose sites need few, or preferably no, post-release bug fixes. Both DevOps Continuous Integration and QA Continuous Testing have been achieved: Developers have coded the software while, in parallel, QA has both verified and validated it to the Client's satisfaction.