BEST PRACTICES, Part 4: Software Quality Assurance


You’ve defined your landscape and destination, your map has been drawn, and you have embarked on your journey. But how do you know when you’ve arrived with a fully functional product? It’s important to conduct testability analysis early in the product life cycle to avoid arriving at your destination with a defective product. Identifying potential defects through analysis, especially during requirements definition, is far less expensive than finding a bug later in the development cycle, after the bulk of the code has been written.


What is testability? Testability is how easy or difficult it is to test a specific component or product, given what is known (and unknown) about its state. Will the firmware/software be used internally, or will it be delivered to a customer? Are there clear instructions in the application to promote the correct workflow and process sequences? A strong start supports a strong finish: good requirements are defined to facilitate testing, which helps ensure that high-quality code is written and incorporated into the code base.

Conducting testability analysis in the planning stages of the product can minimize the number of defects found later in the development cycle. Testability analysis is another way that the firmware/software development team at SIGMADESIGN builds quality into our products from the very beginning.


Where do you start when defining testing goals? Testing goals are directly tied to requirements. Undocumented requirements are a common cause of miscommunication, which leads to changes, bugs, and rework, and often to missed deadlines and budget overruns. Details about defining firmware/software requirements are in Part 1: Requirements: Defining the Landscape. Every time you write a new requirement, ask yourself, “How will successful implementation of this requirement be verified?” A successful implementation is greatly aided by strong requirements.

Once these questions about requirements can be answered satisfactorily, it is possible to lay the groundwork for identifying and defining overall testing goals. It is highly recommended to bring QA engineers in while the requirements are being defined, and it is equally important to keep them in the loop throughout development: changes to development requirements lead to changes to testing requirements.


A user interface (UI) is defined as the interaction between a user and a computer, website, or application. How do we improve UIs to be more testable? We can start by designing a UI with the user in mind. Who will interact with this application? How will this application be used? Think about all of the possible workflows that the user might need to follow. It is important to test all of the workflow possibilities.

The answers to these questions and possible use cases can be used to create a relevant UI. The design must be clear and consistent. The UI must be easy for all levels of users to navigate. It must include observable states and clearly defined inputs, outputs, and limitations. The control behavior must also be well defined. A relevant UI gives access to important resources and supports event logging.

        Consistent design

The placement and formatting of UI elements (buttons, text fields, fonts, etc.) must be consistent. This helps with test case development and test automation development. Avoid abbreviations in button and field names whenever possible; abbreviations, especially uncommon or arbitrary ones, can cause confusion. The overall design of a screen in the UI must be clean and concise. Overloaded (busy) screens are difficult to use, too many elements make testing problematic, and bugs become more difficult to find.

        Observable states

The current state of the product must be observable. This includes elements such as battery levels, the connection status, or error/warning states. The current status should be available when the UI is performing a long task, for example, a firmware update. The end result should be visible in the UI after a test or task has been performed. Indicators such as status colors for pass/fail or displayed text that shows the end result are very useful.

        Clearly defined input/output and limitations

The types of inputs, such as import/export, must be clearly defined in the requirements. Input limitations must also be defined in the requirements. These limitations indicate where the test boundaries are to the test engineers.
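
As a sketch of how documented limitations become test boundaries, the example below uses Python's `unittest` (an xUnit-family framework) to exercise a made-up requirement: a sample-rate setting must accept 1–1000 Hz. The function name and limits are hypothetical, chosen only to illustrate boundary-value testing.

```python
import unittest

def set_sample_rate(rate_hz):
    """Hypothetical setter: the (made-up) requirement allows 1-1000 Hz."""
    if not 1 <= rate_hz <= 1000:
        raise ValueError(f"sample rate out of range: {rate_hz}")
    return rate_hz

class TestSampleRateLimits(unittest.TestCase):
    def test_values_at_the_limits_are_accepted(self):
        # Test exactly at each documented boundary.
        self.assertEqual(set_sample_rate(1), 1)
        self.assertEqual(set_sample_rate(1000), 1000)

    def test_values_past_the_limits_are_rejected(self):
        # One step beyond each boundary must be rejected.
        for bad in (0, 1001):
            with self.assertRaises(ValueError):
                set_sample_rate(bad)
```

Because the limits appear in the requirements, the test engineer knows exactly which values sit on, inside, and outside the boundary; the suite can be run with `python -m unittest`.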

        Clearly defined control behavior

Control behavior must be clearly defined in the requirements. Examples of controls include slider bars, buttons, text boxes, and counters.

        Access to important project resources

Source code is useful for the development of automated testing. Sequence diagrams and flowcharts are beneficial for developing test cases and executing the correct workflow; they also show the relationships between elements within the UI.

        Event logging

This functionality is very helpful to test engineers who are verifying expected vs. abnormal product behaviors. Event logs keep track of the workflow stages in case it is necessary to replicate a bug in the future.
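
A minimal sketch of workflow event logging, using Python's standard `logging` module; the logger name, message text, and firmware-update workflow are hypothetical. On a real device the in-memory buffer would typically be swapped for a file or serial output.

```python
import io
import logging

# Capture events in memory for this sketch; a real device would use
# logging.FileHandler (or a serial/syslog handler) instead.
log_buffer = io.StringIO()
handler = logging.StreamHandler(log_buffer)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("device.workflow")   # hypothetical logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def run_firmware_update(image_name):
    # Record each workflow stage so a failing run can be replayed later.
    logger.info("firmware update started: %s", image_name)
    # ... transfer and verify the image here ...
    logger.info("firmware update finished: %s", image_name)

run_firmware_update("fw-1.2.3.bin")
print(log_buffer.getvalue())
```

Each timestamped entry marks a workflow stage, which is exactly what a test engineer needs to distinguish expected from abnormal behavior and to reproduce a bug.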

Following these guidelines will result in a solid, reliable UI. Think about how the application(s) will be used to find all possible use cases in which to root out bugs.


What are the benefits of automated testing? Any test procedure that is highly repetitive and has little variation is a good candidate for automation. Automated tests save time, can be thorough, and often produce rapid results. This kind of testing can be very efficient and reliable. Automated tests are repeatable and can be tailored to uncover specific issues. Repetition is easier for machines than it is for humans. Automated testing is a very useful development aid and can be used during code review. Tests can be configured to verify that specific requirements are met.

What can be automated? UI testing, particularly repetitive and complex UI and mobile application navigation testing, lends itself to automation. Regression testing of past bugs, especially across multiple versions of the firmware/software or use cases, can be automated, which frees QA engineers to focus on other tasks. Firmware unit tests can be automated to ensure that code segments function in accordance with requirements. API testing is another good candidate for automation; there is great value in developing applications that generate API calls. Embedded tests, such as remote procedure calls (RPCs) for reliability and factory acceptance tests (FATs), are naturals for automation. FATs can be automated and precisely configured to ensure that new products function as intended and meet or exceed acceptance criteria. Many tools are available for automated testing. A few of them are:

  • Google Test framework (C++),
  • xUnit frameworks (e.g. JUnit),
  • Katalon Studio,
  • Custom software.
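
As an illustration of the automated unit tests mentioned above, here is a small xUnit-style example using Python's `unittest` (the same family as JUnit). The function under test is hypothetical: a helper that converts a battery voltage reading into a percentage, with clamping at the range limits.

```python
import unittest

def battery_percent(millivolts, empty_mv=3000, full_mv=4200):
    """Hypothetical helper: map a pack voltage reading to 0-100 %."""
    span = full_mv - empty_mv
    pct = round(100 * (millivolts - empty_mv) / span)
    return max(0, min(100, pct))   # clamp readings outside the range

class TestBatteryPercent(unittest.TestCase):
    def test_known_points(self):
        # Empty, full, and midpoint readings map to expected percentages.
        self.assertEqual(battery_percent(3000), 0)
        self.assertEqual(battery_percent(4200), 100)
        self.assertEqual(battery_percent(3600), 50)

    def test_out_of_range_readings_are_clamped(self):
        self.assertEqual(battery_percent(2500), 0)
        self.assertEqual(battery_percent(5000), 100)
```

Once written, a suite like this runs unattended on every build (`python -m unittest`), which is exactly the kind of repetition machines do better than humans.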

Tools and support equipment are essential to effective testing. There are numerous tools for testing firmware/software. Sometimes it is necessary to develop specialty tools to test a new product. Some tools and test equipment include:

  • Bluetooth traffic sniffer,
  • Network traffic sniffer,
  • Battery simulator,
  • Custom software and/or hardware.

Resources for good testing are very similar to those for writing good requirements for firmware/software development. Understanding use cases and requirements leads to understanding the intended use of the deliverable, which in turn leads to tests that reflect the behavior of the user. The correct resources must be allocated to support the intent of the testing. Real-life data testing focuses on tests that reflect real-world use. Context testing helps to resolve ambiguity. Negative tests are essential to determine what a user should not be able to do.
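
A sketch of a negative test, again in Python's `unittest`. The workflow modeled here is hypothetical: a session where exporting data before anything has been imported is an invalid sequence that the product must reject.

```python
import unittest

class DataSession:
    """Hypothetical model of an import/export workflow."""
    def __init__(self):
        self.records = None

    def import_records(self, records):
        self.records = list(records)

    def export_records(self):
        if self.records is None:
            # Exporting before importing is an invalid workflow step.
            raise RuntimeError("nothing to export: import data first")
        return list(self.records)

class TestNegativePaths(unittest.TestCase):
    def test_export_before_import_is_rejected(self):
        # The negative case: the user should NOT be able to do this.
        session = DataSession()
        with self.assertRaises(RuntimeError):
            session.export_records()

    def test_export_after_import_succeeds(self):
        # The matching positive case, for contrast.
        session = DataSession()
        session.import_records([1, 2, 3])
        self.assertEqual(session.export_records(), [1, 2, 3])
```

The negative test asserts that the forbidden action fails cleanly, rather than only checking that the allowed path works.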

Test environments can be as important as the testing itself. Simulations are good for rapid development. Partial hardware testing is modular and might be specific to a particular product or feature set before integration. Full hardware (on-target) testing includes full integration of features. Beyond-target testing is the testing of the final product in a real-life environment.
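
To show how a simulation environment lets firmware logic be tested without the real hardware, here is a sketch using `unittest.mock` from the Python standard library. The battery-warning check, its 3.3 V threshold, and the sensor interface are all hypothetical stand-ins for a real driver.

```python
import unittest
from unittest import mock

def battery_warning(read_millivolts):
    """Hypothetical check: warn when the pack drops below 3.3 V.

    `read_millivolts` is a callable so the real sensor driver can be
    swapped for a simulator in tests.
    """
    return read_millivolts() < 3300

class TestWithSimulatedBattery(unittest.TestCase):
    def test_low_battery_triggers_warning(self):
        # Simulate the hardware: the 'sensor' returns a scripted reading.
        fake_sensor = mock.Mock(return_value=3100)
        self.assertTrue(battery_warning(fake_sensor))

    def test_healthy_battery_gives_no_warning(self):
        fake_sensor = mock.Mock(return_value=3900)
        self.assertFalse(battery_warning(fake_sensor))
```

The same check later runs unchanged in partial- and full-hardware testing, with the simulated sensor replaced by the real driver.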


How do you know when your development journey is over and you’ve arrived with a fully functional product? The SIGMADESIGN firmware/software development team conducts testability analysis during the planning stages of a project. This saves time and money and helps to prevent the occurrence of bugs. We are constantly on the lookout for ways to improve our processes. At SIGMADESIGN, we do all of the necessary testing in the appropriate test environments with the right resources, including automated testing, to verify that the delivered product meets or exceeds requirements and acceptance criteria.

Here is the Introduction to Firmware and Software Best Practices

You can read here: Part 1: Requirements of the Firmware & Software Best Practices

Read Part 2 here: Part 2: Software Architecture | Creating the Map

Read Part 3 here: Part 3: Software Code Reviews