Test Management Archives - Jama Software

Conquering the Top Test Management Challenges in Product, Systems, and Software Development
https://www.jamasoftware.com/blog/conquering-the-top-test-management-challenges-in-product-systems-and-software-development/

Effective test management is essential to deliver high-quality products, systems, and software on time and within budget. As development projects grow in complexity, managing the testing process becomes increasingly challenging. From coordinating teams to handling intricate data, test management can become a daunting task.

In this blog post, we’ll explore the top test management challenges and provide actionable strategies to conquer them.

1. Coordinating Cross-Functional Teams

  • The Challenge: One of the biggest challenges in test management is coordinating cross-functional teams. In modern development environments, testing often involves collaboration between developers, QA engineers, product managers, and sometimes even external stakeholders. Miscommunication or lack of alignment among these groups can lead to delays, errors, and ultimately, a product that doesn’t meet customer expectations.
  • The Solution: To overcome this challenge, establish clear communication channels and define roles and responsibilities early in the project. Implement regular stand-ups and meetings to ensure that everyone is on the same page. Additionally, using collaboration tools like Jira, Confluence, or Slack can streamline communication and keep everyone aligned. It’s also essential to foster a culture of collaboration where feedback is encouraged and acted upon.

“Jama Connect® covers all the needs regarding requirements management. If anyone requires a tool for requirements, tests, and traceability, Jama Connect is perfect for it.” – Software Test Manager, Industrial Conglomerates Company

2. Managing Test Data

  • The Challenge: Managing test data, particularly in complex systems or software development, is another significant challenge. Test data must be relevant, up-to-date, and secure, especially when dealing with sensitive information. Inadequate test data can lead to incomplete testing, which increases the risk of bugs and compromised quality in the final product.
  • The Solution: Invest in test data management tools – like TestRail – that allow you to create, maintain, and secure test data effectively. Mask sensitive information to comply with data protection regulations and ensure that test data is regularly updated to reflect real-world scenarios. Automating the generation and management of test data can also save time and reduce the potential for human error.

RELATED: Buyer’s Guide: Selecting a Requirements Management and Traceability Solution


3. Keeping Up with Rapid Development Cycles

  • The Challenge: In today’s fast-paced development environments, especially with the adoption of Agile and DevOps methodologies, testing teams often struggle to keep up with rapid development cycles. Continuous integration and continuous deployment (CI/CD) practices demand that testing be both thorough and fast, which can be a difficult balance to achieve.
  • The Solution: Automate as much of the testing process as possible. Automated testing tools can run tests quickly and consistently, allowing your team to keep pace with rapid development cycles. Prioritize test cases based on risk and impact to ensure that the most critical areas are tested first. Integrating automated tests into your CI/CD pipeline will help catch issues early, reducing the need for last-minute fixes.
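
One lightweight way to implement this prioritization is to tag automated tests by risk, so the CI pipeline runs the critical group on every commit and the rest nightly. Here is a minimal JUnit 5 sketch; the checkout domain, the tag names, and the tiny Order type are invented for illustration:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CheckoutTests {

    // Minimal stand-in domain type so the sketch compiles
    record Order(double subtotal, double taxRate) {
        double totalWithTax() { return subtotal * (1 + taxRate); }
    }

    @Test
    @Tag("critical") // high risk/impact: run on every commit
    void totalIncludesTax() {
        assertEquals(110.0, new Order(100.0, 0.10).totalWithTax(), 0.001);
    }

    @Test
    @Tag("extended") // lower risk: run in the nightly suite only
    void zeroTaxRegionKeepsSubtotal() {
        assertEquals(100.0, new Order(100.0, 0.0).totalWithTax(), 0.001);
    }
}
```

A commit build can then select only the tagged subset (with Maven Surefire, -Dgroups=critical) while the nightly job runs everything, so the riskiest checks always run first.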

“If working in Aerospace / Avionics engineering, Jama Connect is a solid option to handle requirements, elements of detailed design and Test artifacts. It also enhances cross-team collaboration through the Review Center, the Stream feature.” – Arthur Bouisson, Process Engineer, RUAG Real Estate

4. Handling Complex Test Environments

  • The Challenge: Test environments are often complex, involving multiple systems, configurations, and platforms. Setting up and maintaining these environments can be time-consuming and prone to errors. Moreover, inconsistent test environments can lead to false positives or missed defects.
  • The Solution: Leverage virtualization and containerization technologies, such as Docker or Kubernetes, to create consistent and reproducible test environments. These technologies allow you to simulate various environments and configurations with ease, ensuring that tests are conducted in conditions that closely mirror production. Additionally, maintain a detailed configuration management process to document and track changes in test environments.
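
On the JVM, one common way to get these reproducible environments directly from test code is the Testcontainers library, which starts a throwaway Docker container per test run. A minimal sketch, assuming Docker is available on the test host; the PostgreSQL image tag is illustrative:

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class RepositoryIT {

    // A fresh, isolated PostgreSQL instance for the test class
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

    @Test
    void connectsToDisposableDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             ResultSet rs = conn.createStatement().executeQuery("SELECT 1")) {
            assertTrue(rs.next()); // the environment came up the same way it does every run
        }
    }
}
```

Because the container is created fresh each run, every engineer and CI agent tests against an identical environment, removing the "works on my machine" class of false positives.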

5. Ensuring Comprehensive Test Coverage

  • The Challenge: Achieving comprehensive test coverage is a constant challenge. With the increasing complexity of products and software, it’s easy to overlook certain areas, leading to gaps in testing that could result in critical defects.
  • The Solution: Adopt a risk-based testing approach. Focus on areas of the product that are most critical or most likely to fail, and ensure these areas receive the most attention. Use code coverage tools to identify untested parts of your codebase and supplement manual testing with automated tests to expand coverage. Regularly review and update your test cases to reflect changes in the product or system.
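
Once a coverage tool (for example, JaCoCo on the JVM) flags an untested branch, parameterized tests are a cheap way to close the gap, especially at boundary values. A small JUnit 5 sketch with an invented discount rule:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountRuleTest {

    // Hypothetical rule under test: 10+ units get 5%, 100+ get 15%
    static double discountFor(int units) {
        if (units >= 100) return 0.15;
        if (units >= 10)  return 0.05;
        return 0.0;
    }

    // One test method exercises every branch, including the boundary
    // values a single hand-written case would likely miss
    @ParameterizedTest
    @CsvSource({"1, 0.0", "9, 0.0", "10, 0.05", "99, 0.05", "100, 0.15"})
    void appliesTieredDiscounts(int units, double expected) {
        assertEquals(expected, discountFor(units), 0.0001);
    }
}
```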

“We know Jama Connect has improved our test coverage (>15%) and allowed for faster, more comprehensive reviews. Interestingly, these reviews have found bugs or issues that were not uncovered by traditional directed and random testing.” – Jama Administrator, Internet Software & Services Company

6. Managing Test Automation Effectively

  • The Challenge: While test automation is a powerful tool for improving efficiency and coverage, managing it effectively presents its own set of challenges. Common issues include maintaining the test scripts, dealing with flaky tests, and ensuring that automation delivers the expected return on investment.
  • The Solution: Focus on building robust, maintainable test scripts by following best practices, such as modularizing your code and using descriptive naming conventions. Regularly review and update your automation suite to remove flaky tests and ensure that it continues to provide value. Finally, measure the effectiveness of your automation efforts through metrics like defect detection rates and test execution times, and adjust your strategy as needed.
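
For UI automation, the Page Object pattern is a standard way to apply this modularization: selectors live in one class, so a UI change touches one file rather than every script. A hedged Selenium sketch in Java, with invented element IDs and page:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page Object: the only place that knows how the login page is built
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) { this.driver = driver; }

    // Descriptive method names document intent; locators stay private
    LoginPage enterCredentials(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        return this;
    }

    void submit() {
        driver.findElement(By.id("login-button")).click();
    }
}
```

Test scripts then read as intent – new LoginPage(driver).enterCredentials(user, pass) – and when the login screen changes, only the page object needs maintenance, which directly reduces the script-upkeep burden described above.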

RELATED: Jama Connect® – Test and Quality Management


7. Balancing Manual and Automated Testing

  • The Challenge: Finding the right balance between manual and automated testing is another common challenge. Over-reliance on one approach can lead to inefficiencies and missed defects.
  • The Solution: Develop a testing strategy that leverages the strengths of both manual and automated testing. Use automated testing for repetitive, time-consuming tasks, and manual testing for areas that require human judgment, such as user experience and exploratory testing. Regularly evaluate and adjust this balance as your project evolves and new testing needs arise.

“We screened three of the top requirements, risk, and test management tools and found Jama Connect scored much higher than the competitors. Jama Connect definitely meets our user needs.” – Principal Systems Engineer, Health Care Providers & Services Company

8. Poorly Written or Incomplete Requirements

  • The Challenge: A testing suite can only be as good as the requirements being tested. Poor-quality or missing requirements mean untrustworthy test results and a higher chance of defects. This is preventable: catching requirements issues once testing is underway is far more expensive than improving requirements quality earlier in the process.
  • The Solution: Educate the teams writing requirements on best-practice frameworks such as the Easy Approach to Requirements Syntax (EARS), whose patterns read like “When <trigger>, the <system> shall <response>.” Review requirements for completeness and quality before building out test coverage, and make sure test writers/testers can collaborate directly with requirements authors when questions arise.

9. Undetected Impact of Changes

  • The Challenge: Changes happen, and no one likes to be blindsided. It can be challenging to accurately measure the impact of a change and communicate it to all impacted stakeholders. Failing to communicate changes to the appropriate stakeholders can lead to wasted resources on tests that no longer apply or need to be updated, delays, recalls, and more.
  • The Solution: Establish a change control process. Use a tool that helps you track and visualize the potential impact of changes across connected bodies of work, processes, and stakeholders. Review potential impact, discuss tradeoffs, and communicate with impacted stakeholders.

“Jama Software® is always looking for opportunities to improve its requirement management tool offering by adding new features and applications (e.g. Testing, Risk Management, V&V, SW application integration tools, etc.) – Jama Software listens to customer feedback for possible improvements to Jama Connect” – Director, Internet Software & Services Company

Conclusion

Test management is a critical component of successful product, systems, and software development. By addressing these common challenges with proactive strategies, you can improve the efficiency and effectiveness of your testing efforts. Clear communication, effective use of tools, and a balanced approach to testing will help you deliver high-quality products that meet both business objectives and customer expectations.

In the end, the key to conquering these challenges lies in continuous improvement. Regularly assess your testing processes, learn from past mistakes, and be willing to adapt to new tools and methodologies. With the right approach, even the most daunting test management challenges can be overcome.


Note: This article was drafted with the aid of AI. Additional content, edits for accuracy, and industry expertise by Ashley Ernst and McKenzie Jonsson.

2020s Predictions: Test Automation and Artificial Intelligence
https://www.jamasoftware.com/blog/2020s-predictions-test-automation-artificial-intelligence/

As we enter a new decade of technological advancements, Jama Software asked select thought leaders from various industries for the trends and events they foresee unfolding over the next 10 years.

In the fifth installment of our 2020s Predictions series, we’re featuring software testing predictions from Ricardo Camacho, Technical Product Marketing Manager at LDRA.

Jama Software: What are the biggest trends you’re seeing in software testing and how are they impacting product development?

Ricardo Camacho: I’m seeing a continued trend in the embrace of test automation – not just to keep pace with the adoption and refinement of DevOps and Agile practices, but also due to increasing demands for software safety and security, which are of great concern in today’s world. Each industry has different challenges, so different needs and focuses in test automation solutions are being sought.

One example of this is in the automotive industry, particularly in Advanced Driver-Assistance Systems (ADAS) – a prime example of a complex, evolving system. Here you have the development of advanced magnetic position sensors for electronic power steering applications, along with other sensors – speed, inertial, and image – that factor into that ecosystem.

These are also distributed systems, with components provided by different suppliers using different software stacks and different software development methodologies, made up of millions of lines of code. Test automation, by way of adherence to a common coding standard across all suppliers, establishes a standard platform – in other words, a universal software development platform for vehicle software that addresses safety, security, and defect-free product goals. Not only have some automotive companies developed their own coding standards to enforce this, but we also see it in the movement toward, and merger of, the MISRA and AUTOSAR industry standards.

JS: Are there any technological advancements that you are seeing in software testing?

RC: I’m seeing the emerging use of artificial intelligence (AI) and machine learning for software testing, which continues to evolve and will make an enormous impact. Today, in most organizations, thousands of test cases are created; many are redundant, and some have defects. Test scripts are also not intelligent enough to determine the state conditions of the system under test, so sleep or wait instructions need to be added to properly fulfill testing needs. The interdependencies between test scripts add further complexity, which tends to cause test failures and script changes. Furthermore, some testing continues to require human interaction, or visual inspection, which makes it error prone. So, AI is the next evolutionary step in software testing.

Artificial intelligence provides many efficiencies and fills many of the gaps in software testing. One of the biggest impacts will be through spidering, where the AI crawls the application, collecting data and even taking screenshots to learn the expected behavior of the application. The AI can then compare future runs to known patterns, exposing deviations quickly. AI similarly addresses Application Program Interface (API) testing by recording and mocking responses, which significantly reduces the time it takes to perform API testing. Additionally, AI is not limited to text comparisons in validation; it can validate all types of data (e.g., video, audio, images).

Therefore, with AI’s help, more robust and reliable test cases are produced in less time. AI improves test coverage and accuracy, and provides superior insight into the health of the application. AI is bringing a transformation to software testing, and it’s on the horizon. Thus, 2020 will bring forward these types of needed solutions.

Learn how Jama Software is partnering with LDRA to deliver a test validation and verification solution for safety- and security-critical embedded software by watching this video.

Reducing Medical Device Risk with Usability Testing: The Why, the How, and the Who
https://www.jamasoftware.com/blog/reducing-medical-device-risk-with-usability-testing/

This is a guest post from Christopher Kim, M.D., Manager of Human Factors & Usability at the product development firm, Bresslergroup. It originally appeared on their blog. 
    Industry research shows that each year, approximately 400,000 hospitalized patients experience some kind of preventable harm.

    Medical device and equipment flaws caused by poorly researched design, mishandling, user error, and malfunction are common causes of medical errors. Many of these errors can be attributed to lack of standardization, poor design, poor maintenance, differences between devices from different manufacturers, and more.

    Devices recalled nationwide are typically taken to task for usability issues that could have been prevented with proper usability testing throughout a product’s development lifecycle. This means there’s a significant opportunity to reduce the potential for harm by conducting quality usability testing early and often.

    The Need for Usability Testing

    Digging further into causes of those medical device recalls, the top four, according to industry statistics, are:

    • Software issues
    • Mislabeling issues
    • Quality issues
    • Missing the mark with specifications

    By testing early enough in the product design lifecycle, teams can prevent life-threatening medical device errors through appropriate design changes. Unfortunately, testing is often done too late in the process, when the design has already been “frozen.” Once you get to this point, use errors are often forced into mitigation by changing the label or altering the instructions-for-use (IFU). As the statistics show, these measures are frequently insufficient.

    So how can you prevent this by proactively starting a user-centered design process?

    1. Begin: Create a Human Factors Engineering and Usability Engineering Plan

    First, draft a strategic Human-Factors Engineering (HFE) and Usability Engineering (UE) Plan. Putting your users at the center of your product design process may seem intuitive, but once you get into the weeds of development, these user needs can get lost.

    It’s helpful to keep three questions in mind: Who are the users? What are the product’s intended uses? In what use environment is the product being deployed?

    From there, dive deeper:

    Who are the users?

    • Who are all the intended users?
    • Do any of these users interact only with specific aspects of the user interface? Could there be a subset of users who interact with only one aspect of the device, and another group of users who interact with another portion entirely, with very little interaction in between? For example, take software that’s been designed to help patients manage their sleep apnea and CPAP (continuous positive airway pressure) treatment while also allowing physicians and nurses to review their settings and compliance. A nurse, physician, and patient might all interact with the software in very different ways. Although the nurse and physician might have similar interactions, it’s important to determine whether certain actions are appropriate for the nurse, the physician, or both.

    What are the product’s intended uses?

    • What’s the intended use? How and in what circumstance should the product be used? What is the intended medical indication the product is meant to treat?
    • Are patients using the product to self-treat? Could there be any physical limitations that prevent a certain patient from interacting with your product as intended?
    • Is training required? What level of training is required, if necessary? (We highly recommend training be kept front of mind throughout the development process.)
    • What is the user interface of the product? This includes all points of interaction — physical and digital — between the user and the device.

    In what use environment is the product being deployed?

    • Is the product intended to be used exclusively in the home by a patient? In an emergency room or ambulance?
    • Does the product use differ, depending on the environment of use? Whether the user is in an adult ICU, a NICU, or in an ER trauma bay — each of these contributes to significant differences in the way a product is used.

    From a regulatory standpoint, the FDA will be looking to see that all elements of the users, uses, and use environments have been researched and incorporated into the plan for conducting proper usability testing in the relevant simulated-use environment with the intended users and uses.

    The ideal process might include multiple rounds of formative usability testing, which can be used to inform a Use Failure Mode and Effects Analysis (uFMEA). As you go through the cyclical design process — adding design inputs, testing them, refining, then adding more design inputs — you’ll eventually reach a point where you’re comfortable with the product you’ve created.

    A final production-equivalent medical device should be put through validation, or summative, testing, to assess whether or not the product can be deemed safe and effective to use.

    2. Assess: Create a Use Failure Mode and Effects Analysis (uFMEA)

    A uFMEA assessment is meant to identify components of the user interface and the impact of task failures. What are the possible failures that could occur? Why would these failures occur? What are the consequences of the failure and what’s the associated harm?

    Start by creating a step-by-step list of all tasks required to accomplish the device’s end goals. (Tasks are defined as the action or set of actions performed by a user to achieve a specific goal.)

    The uFMEA also notes what could go wrong if any of these tasks are not completed correctly. To help determine this, go through a PCA (perception, cognition, action) analysis process. As your team goes about defining tasks, determining failure modes, and assigning levels of severity of harm, you may next want to begin categorizing your tasks as critical versus non-critical. Critical tasks are those which, if performed incorrectly or not performed at all, could cause serious harm to the patient or user, where harm is defined to include compromised medical care. User tasks are often tied to product requirements, and usability testing (formative and validation) will be needed to prove these user needs have been met.

    Finally, when developing your product’s uFMEA, make sure your team has thought through the level of mitigation that will, ultimately, be required by the IFU and labeling within the context of intended use. During this process, you may also come to a point where you recognize whether or not your product will require some level of training.

    Be diligent about identifying all potential use errors and outcomes before getting into validation or even pre-validation testing. Be as clear as possible about what could go wrong and what needs to go right to establish that a task has been completed successfully. These success criteria will feed into a task analysis table that will define what needs to be tested from a usability standpoint.

    And finally, keep in mind that a uFMEA should be treated as a living, breathing document that evolves as the product progresses through the design process. Expect it to change as the device goes through rounds of formative usability testing, is put in the hands of users, and as the design team gains a better understanding of how the product can and should be used.

    3. Research: Use These Three Approaches To Usability Testing

    There are three usability testing approaches to consider:

    Rapid Insight Testing may be appropriate if all that’s needed is a quick touch-base to keep the design grounded in user needs. Roughly five to six participants are recommended, but it’s possible to test fewer in certain situations. This type of testing typically occurs prior to entering design control.

    We recommend conducting Rapid Insight Testing on all elements of the product, including human factors. Sample questions are: How did it feel to use or handle the product? Is the grip intuitive or did someone need to show me how to grip it? Are we testing all of the possible users of the product or are there others that could be seen as a potential user? This is your opportunity to evaluate all tasks, not just the critical ones. Additionally, this is your chance to get as much subjective feedback on the product as possible! What are their preferences? What are they used to seeing and what would they like to see changed?

    Formative Testing begins to inform the design process. Anywhere from five to ten representative users per user group are recommended, with the aim of simulating use of the product in its intended environment with a high-fidelity prototype. We suggest conducting a minimum of two to three rounds of formative testing during a product development cycle. This is what’s going to lead to product design change mitigations, which is ideal – better now than later! Design changes are hugely preferable to changing labeling or instructions-for-use (IFU) late in the game.

    Validation Testing is required by regulatory bodies in the U.S., Europe, and elsewhere around the world. At this stage, testing with a production-equivalent product is expected, and when dealing with the FDA, 15 representative users per user group are required, as is representative simulated use in the simulated environment.

    It’s important to note that once you’ve reached validation testing, only critical user tasks should be evaluated, so it is important to evaluate all non-essential user tasks earlier in the process via rapid insight and formative testing.

    4. Comply: Resources and Tips for Regulatory Consistency

    A long discussion could be had when it comes to regulatory compliance, but here are a few resources, tips, and misconceptions to keep in mind.

    Resources for Regulatory Compliance

    The Association for the Advancement of Medical Instrumentation’s (AAMI) standard HE75:2009 provides general considerations and principles to help manage the overall risk of use error with best practice design elements and integrated solutions. This document acts as a comprehensive reference for how to incorporate human factors engineering into the medical device design process.

    As far as IEC 62366 goes, it guides the entire usability engineering process, including the elements of accompanying documentation and training. The main purpose of this document is to help define the human factors engineering and usability process as it pertains to medical device design, including consideration of risk management.

    The FDA guidance — 2016’s “Applying Human Factors and Usability Engineering to Medical Devices” — lays out the needs and expectations for your Human Factors/Usability Engineering Report.

    The FDA developed this guidance to “assist industry in following appropriate human factors and engineering processes to maximize the likelihood that new medical devices will be safe and effective for the intended users, uses, and use environments.” The Center for Devices and Radiological Health (CDRH) considers human factors testing a valuable component of product development for medical devices, so it is very important to be mindful of usability throughout the design process.

    You Say FDA, I Say IEC

    It’s good to know some of the key differences in expectations between the FDA (Food and Drug Administration — U.S.) and the IEC (International Electrotechnical Commission):

    • The FDA expects a single report — the HFE/UE Report — to hold all relevant information while the IEC 62366 requests similar information and data but does not require a single report.
    • The FDA expects to see conclusions at the beginning of the document. The IEC does not have these same expectations.
    • IEC 62366 uses the term “summative study,” but the FDA uses the term “validation”; the terms are interchangeable, which confuses those who are not familiar with the industry. With that in mind, note that “usability validation” is a very specific element you need to test for that is within an overall umbrella of design validation.
    • The FDA only accepts validation usability studies conducted in the United States; IEC does not state such requirements for location. Note, however, that any differences in the intended uses, users, and use environments between the U.S. and Europe and any other international body needs to be appropriately tested.

    Don’t Go It Alone!

    We know this is a lot to take in. If you need a development partner, Bresslergroup is always up for a good challenge. With our design strategists, mechanical and electrical engineers, and industrial and interaction designers, we’re well-equipped to assist in making a better product. (Read about our medical product design experience and expertise.)

    In addition to providing design and process recommendations and collaborating on system enhancements, we can plan, conduct, and report on all elements of your usability testing needs — from rapid insight testing all the way through validation.

    Hear experts from Bresslergroup and Jama Software discuss strategies for mitigating design-related problems with medical device development by watching our joint webinar, “Accelerate Medical Device Development While Reducing Risk.”

Jama Test Management Center: Ensuring Product Quality and Managing Risk
https://www.jamasoftware.com/blog/jama-test-management-center-ensuring-product-quality-and-managing-risk/

    Testing is a critical phase of quality control. Whether an organization includes a well-defined test strategy in its development process can make or break a product’s success in the market.

    The right combination of manual and automated testing will result in a higher-quality product, and that’s ultimately what everyone wants for their users.

    With Jama’s Test Management Center, Quality Assurance (QA) teams can design reliable testing strategies resulting in defect-free products that adhere to even the strictest compliance standards.

    Constructing robust testing strategies requires broad strategic thinking, and more often than not teams begin formulating testing strategies too late in the game.

    By providing QA teams with more visibility earlier in the product development process, you’ll increase quality by identifying problems before they arise and devoting the right resources to fix them.

    A reliable testing strategy typically involves six phases. With the proper implementation, product developers can consistently deliver high-quality products that exceed expectations.

    Requirements Analysis

    In Jama Connect, every test case can be linked back to a requirement. This allows QA teams to have immediate visibility to test cases while clearly understanding the critical requirements behind each test.

    It’s crucial that teams understand specific feature and design expectations and are able to resolve any conflicts stemming from unclear or unspecified requirements. With this unique requirement linkage, teams can spend less time on requirements analysis and more time solidifying their test plans.

    Test Plan Creation

    The test plan is the most important phase in the testing strategy. This document outlines the entire testing process for a specific product.

    Well-executed and documented test plans ensure high-quality products. The success or failure of a product can depend on how well a test plan is carried out.

    QA teams want to achieve quality in efficient and risk-free ways, so it’s important that a well-formulated test plan can be reused as a template for additional test plans.

    With Jama Connect, teams can reuse the test plans they’ve created in Test Management Center for projects with the same requirements, saving time and increasing confidence in their compliance.

    Test Case Creation/Execution

    At the completion of your development cycle, it’s time to get testing with a well-documented test plan.

    With Jama’s Test Management Center, you can easily create and execute many types of manual tests, including functional tests, non-functional tests and maintenance tests.

    This is when all the stakeholders come together to review any product defects — often in the form of technical reviews centrally conducted within the system. Finally, after each test, a test report is generated that details a list of defects identified.

    Defect Logging/Fix

    It’s important for quality and development teams to work closely together with real-time visibility into defects across all teams.

    When QA teams log defects in Jama Connect, those defects will be immediately visible to development teams in ALM tools such as Jira, streamlining the end-to-end, find-fix-resolve process.

    While manual testing helps QA teams understand the entire context of a problem from an end-user perspective, Jama’s API makes it easy to link to automation tools such as Jenkins, TeamCity, Selenium and TestRail to run multiple tests in a short period of time.
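
As a rough illustration of how such a link can work, an automation job might push a run’s outcome back over REST when it finishes. The sketch below uses Java’s built-in HTTP client; the endpoint path, payload fields, and credentials are assumptions for illustration only – consult the Jama REST API documentation for the actual testruns contract.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestRunReporter {
    public static void main(String[] args) throws Exception {
        // Base URL, run ID, and payload fields are illustrative assumptions,
        // not the documented Jama API contract
        String json = "{\"fields\": {\"testRunStatus\": \"PASSED\"}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.jamacloud.com/rest/v1/testruns/123"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic <credentials>") // placeholder
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Jama responded: " + response.statusCode());
    }
}
```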

    With Jama’s Test Management Center, organizations are empowered to manage product development and meet compliance standards at a faster pace.

    Have the testing phase down to a science? Great! Check out this short webinar to learn more about key metrics for product development success.

New Test Runner App Enables Testing On The Go
https://www.jamasoftware.com/blog/new-test-runner-app-enables-testing-go/

    Pictured from left to right: Lauren Cooper, Will Huiras, Jason Ritz, Ben Lawrence, Devan Cakebread, Meghan McBee, and David Wagg.

    Jama Software is committed to our mission of helping customers bring their innovations to market faster, and with our reliable REST API we are doing just that. From custom data integrations to ETL tools, REST has proven to be an invaluable asset utilized by many of our customers.

    REST has also led to some innovations for Jama. In 2016, we worked with students from Portland State University (PSU) in the Capstone Program to develop a trace visualization tool: OverView for Jama.

    Today, we are thrilled to announce our latest collaboration with this year’s PSU Capstone students, who developed the very first iOS mobile application for Jama, Test Runner. The application, which has been submitted to Apple’s App Store and is currently under review, will allow users to view and execute tests that have been assigned to them on the go.

    Motivated Team

    This year’s Capstone team was made up of seven talented students — Lauren Cooper, Will Huiras, Jason Ritz, Ben Lawrence, Devan Cakebread, Meghan McBee, and David Wagg — who are in their final year of computer science studies.

    They were eager to be sponsored by Jama because “the project had a clearly-defined scope and purpose,” said team member Lauren Cooper.

    Over several months, the students dove into pair programming and other Agile practices to plan, build, and test their iOS app.

    “Initially, we had a steep learning curve — we never wrote an iOS app before and were very new to Swift programming language,” said team member David Wagg.

    The students worked closely with Jama’s professional services and UX teams to build the Test Runner app.

    Amazing App

    When asked what they learned about users of the Test Runner app, Lauren Cooper said, “We have new, profound respect for product testers.”

    During their product demo of Test Runner for Jama staff, the team called out how refreshing it was to use an API that was well-documented, straightforward, and responsive. You can view the team’s open-source project now on GitHub.

    We loved seeing the results of this project and hope the PSU team’s work inspires our customers as much as it has us. A huge thanks to the entire team and PSU!

    Note: This post will be updated with a link to the Test Runner application on the Apple App Store when it becomes available. For now, users can clone the team’s repository on GitHub, and install the application onto their iOS devices via Xcode.

Test Driven Development – It’s Not About the Tests
https://www.jamasoftware.com/blog/test-driven-development/

    Test Driven Development (TDD) was introduced more than a decade ago by Kent Beck and has been widely adopted by effective, agile development teams. Still, after all this success, it is surprising how often we miss the reason for TDD. Here’s the surprising thing: Tests are not the point of Test Driven Development, they are a useful byproduct. When plants perform photosynthesis they produce oxygen as a byproduct, and we are all thankful for that. We are also thankful that TDD produces vital tests for our code, but the point of TDD, and the reason you should adopt it is something else.

    TDD Refresher

    TDD has 3 simple steps performed in short, tight cycles (perhaps as short as 30 seconds each when you are an expert):

    1. Write a small test that fails
    2. Write just enough code to make it pass
    3. Refactor the code until it is clean

    (repeat)

    In step 1, you are asking yourself, how do I communicate with this code and what is the most useful result I can expect back? Because you are writing the test first, you start by envisioning the best possible interface to the code and the simplest, most useful results. As you implement the code, reality and other constraints may nudge you off of this ideal, but it is always the starting point.

    For step 2, you write just enough code to make the test pass. Often, this will be a completely fake implementation just to get the test passing as quickly as possible. Red (i.e., failing) tests will tend to make you twitch in discomfort. If the real code necessary to turn the test green isn’t right at your fingertips, implement something less real that satisfies the test’s demands. Now you have a functioning test that is protecting you from straying off the path of green.

    Step 3: You have a passing test, congrats! Now with the code constrained by the need to keep the test green, you refactor the code to make it conform to the rules of good design and craftsmanship. When your implementation is satisfactory, return to Step 1 by asking yourself, “What is the next test I need to write?”
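
To make one full cycle concrete, here is a micro-iteration in Java with JUnit; the Wallet example is mine, not Beck’s:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class WalletTest {
    // Step 1: written first, fails because Wallet doesn't exist yet.
    // Writing it first also fixes the interface: a deposit method
    // and a balance query, before any implementation exists.
    @Test
    void depositIncreasesBalance() {
        Wallet wallet = new Wallet();
        wallet.deposit(50);
        assertEquals(50, wallet.balance());
    }
}

// Step 2: just enough code to go green -- even a hard-coded 50 would be
// legitimate here until the next test forces a real implementation.
class Wallet {
    private int balance = 0;

    void deposit(int amount) { balance += amount; }

    int balance() { return balance; }
}
// Step 3: refactor with the test as a safety net, then repeat.
```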

    Misconceptions

    Although the TDD rules are simple, there are frequent misconceptions.

    • You write all the tests first before writing any code. No. As described above, write one small test, then one bit of code to make it pass. The tests and the code are implemented together in tight iterations.
    • TDD takes too long. If you haven’t been writing tests at all, then yes, more time is added to your development effort, but this is true if you write them first or last. What TDD teams realize once they start using it expertly is that overall development is faster because of reduced rework, refactoring, and bug fixing.
    • TDD is primarily a testing process. This is the point of this entire post, so let’s explore it in detail…

    The Evolution of a Test Driven Developer

    With any skill, it takes time to become an expert. Until you reach a certain level, the true benefits of that skill don’t reveal themselves. With TDD, there are 4 stages you must grow through.

    Stage 1: Write Unit Tests

    You have to start somewhere. Learn the unit testing framework(s) available for your development language. Leverage the shortcuts and plugins in your IDE that make unit testing as quick and easy as possible.

    Stage 2: Write Good Unit Tests

    What makes for good unit tests is the topic for another blog post, but in general, remember that unit tests test a unit of functionality, not a unit of code. Your tests are applying inputs and examining results for testing a small bit of functionality. How this functionality is realized in code is not their concern. Unit tests also run as independent units in isolation from one another. They can be run in any order and always get the same results. And they are fast. You’ll be running them frequently and so they must be as fast as possible.

    Stage 3: Write Good Unit Tests First

    You are now asking, ‘What’s the next test I need to write?’ instead of ‘What do I need to add to the code?’. When you’ve run out of ideas for tests, you are done with the code. You are always thinking first about the next bit of functionality that must be tested for, not about how it will be implemented. You will remain (happily and productively) at this stage for a long time.

    Stage 4: The tests drive design decisions

    It’s at this final step in your TDD evolution that you realize – it’s not about the tests. You now rely on TDD to drive good design. So finally we reach the point of this post: Test Driven Development is not a test tool, it’s a design tool.

    Here are a few ways this works:

    Rapid Feedback

    The short feedback loop of the TDD discipline gives you a quick and constant evaluation of your decisions. If tests are getting hard to write, there’s something wrong with the code, back up and take a look at it. If the tests are red for too long while you try to figure out an implementation, your design needs attention. Maybe break things into smaller, more easily understood chunks.

    Pressure Against Gold-plating/Future-proofing

    It’s tempting to write or extend code to handle conditions that might (but probably won’t) be needed in the future – and then, if you are doing test-last, to forget to write tests for the currently unneeded features. When your tests are driving your development, you’re always trying to tease out the design details of your current concern – what is being asked for right now. When those concerns are met, you’re done. If design changes are requested in the future, you start from a great place, with well-written tests to guide you and clean code to work with.

    Pressure Against Complexity

    One of the challenges of test-last, when you are trying to achieve a certain target for lines covered by tests, is reaching every dark corner of deeply nested branches and every last private method that the code under test relies on. Hallmarks of tests written after the fact for legacy code include a huge amount of setup and an alarming amount of mock-object use, all done to help the tests reach deep down into the complex code.

    When a test is driving development, every branch and every method is by default easily reachable. The tests dictate their creation, and there is no need to go back and figure out how to exercise a particular branch through the code. The well-written tests also serve as documentation about how the code is used. Also, because you are keeping the tests simple and concentrating on small increments of functionality, you are less likely to create complex cyclomatic craziness.

    Promoting Cohesion over Coupling

    I’ve recently reached a new stage in my testing philosophy. My practice has always been (in the Java world) to create a one-to-one relationship between the test class and the code under test. That is, for a Foo.java class, I would have a FooTest.java class containing all the tests for all the methods in Foo. I am now coming around to the idea that it is better to group tests by functionality. A test class that is focusing on a specific piece of functionality will naturally bring together, or bring about the creation of, objects that cohesively work together to perform that functionality. A test that is solely mapped to one object will naturally prefer that object to perform the entire function itself (even if it shouldn’t), leading to coupling and violations of the single responsibility principle.
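
A sketch of the difference, with invented names: instead of a CartTest that pressures Cart to compute everything itself, a test named for the pricing functionality happily pulls in several small collaborators:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Grouped by the "order pricing" functionality, not by a single class.
// The test invites whichever small objects cooperate to do the job.
class OrderPricingTest {

    @Test
    void totalCombinesItemsTaxAndShipping() {
        Cart cart = new Cart(100.00);
        TaxPolicy tax = new TaxPolicy(0.10);            // its own responsibility
        ShippingRate shipping = new ShippingRate(5.00); // its own responsibility

        double total = cart.subtotal() + tax.on(cart.subtotal()) + shipping.flat();
        assertEquals(115.00, total, 0.001);
    }

    // Minimal stand-in types so the sketch compiles
    record Cart(double subtotal) {}
    record TaxPolicy(double rate) { double on(double amount) { return amount * rate; } }
    record ShippingRate(double flat) {}
}
```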

    Test Driven Development is a guide. It will help you make good design decisions but will not guarantee them. TDD provides positive pressure – pressure that can be ignored or misinterpreted. Writing good software is an art requiring experience and discipline to maintain the correct balance between all the competing, conflicting pressures.

    Your Tests are Talking To You

    When you focus on quick TDD cycles and are sensitive to times when you struggle to write a test or struggle to complete the code that makes your test pass, then you will immediately be aware of design problems and will course correct. Every test you write is telling you something about your design. Every test you write first is trying to influence your design to be better.

    Better design, cleaner code, and a suite of tests. That’s a pretty good deal.

Why Automation Projects Fail and How to Avoid the Pitfalls
https://www.jamasoftware.com/blog/why-so-many-functional-automation-projects-fail/

    Overview

    Automation remains one of the most contentious topics in software development. You get three engineers into a room to discuss automation and you will end up with four contradicting absolutes and definitions. So for the purpose of this post, we will place the following limits on the topic:

    • Automation in this post will refer specifically to Automation of Functional Tests for a UI or API.
      • Functional Tests will be defined as tests containing a prescribed set of steps executed via an interface connected to consistent data, producing an identical result each time they’re executed.
    • Failure in this post will be defined as greater than 3 months of effort spent creating automation that is not acted upon, or that is determined to be too expensive or buggy to maintain after 12 months and is turned off or ignored.

    This post will cover the three most common reasons automation fails:

    1. Inability to describe a specific business need/objective that automation can solve.
    2. Automation is treated as a time-boxed activity and not a business/development process.
    3. Automation is created by a collective without a strong owner who is responsible for standards.

    Wait, Who are You?

    I am Michael Cowan, Senior QA Engineer at Jama Software. Over the past 20 years I have been a tester, developer, engineer, manager and QA architect. I have built automation solutions for Windows, Web, Mobile and APIs. I have orchestrated the creation of truly spectacular monstrosities that have wasted large sums of money/resources as well as designed frameworks that have enabled global teams to work together on different platforms, saving large(r) sums of money.

    I have had the amazing opportunity to be the lead designer and implementer of automation for complex systems that handled millions of transactions a day (Comcast), dealt with 20 year old systems managing millions of dollars (banking), worked in high security/zero tolerance (Homeland Security) environments and processed massive big data streams (Facebook partner). I have worked side by side with brilliant people, attended conferences and trainings, as well as given my own talks and lectures.

    I have a very “old school” business focused philosophy when it comes to automation. To me it is not a journey or an exploration of cool technology. Automation is a tool to reduce development and operating costs, while freeing up resources to work on more complicated things. I strongly believe that automation is a value add for companies that correctly invest in it, and a time/money sink for companies that let it run wild in their organizations.

    Failure Reason #1: Unable to describe a specific business need/objective that automation can solve

    The harsh truth is that, by itself, clicking a button on a page (even a really cool/difficult/complex custom button) has no value to the business. The business doesn’t care if you click that button manually with the mouse, execute it with JavaScript, call the business logic via API, or directly manipulate the database. What they care about is ensuring that a customer is not going to call up after the release to return the product, or that some blogger won’t discover a major issue and drive away investors with a scathing review.

    Automation projects fail when they are technical exercises that are not tied to specific business needs. If the ROI (Return on Investment) is not clearly understood, you are unlikely to get the funding to do automation correctly. Instead you will find your efforts rushed to just implement “automation” and move on. Months later, everyone is confused about why automation hasn’t been completed, why it doesn’t do x, y, and z, and why all the things they assumed would be included were never planned.

    Nothing is worse than a team of automation engineers thinking they are making great progress, only to have the business decide to pull the team apart due to a lack of understanding of the value. If you are running automation directly tied to an understood business need, the business leaders will be invested. You will find support because your metrics will clearly show the value being produced.

    Another consequence of running automation as an engineering project is making decisions based on technology instead of business need. If you decide upfront to use some open-source tool you read about, you will find yourself telling the business what it (you) can’t do: “Well no, our tool doesn’t hook into our build server, but we can stop writing tests and build a shim.” Pretty soon you are spending all your time making your project more feature-rich instead of creating the test cases the business needs. This is how teams can spend 6-12 months building a handful of simple automation scripts. Even worse, you end up with a large code base that now needs to be maintained, and the majority of your time will have been spent building shims, hacks, and complexity that has nothing to do with your business offering or domain.

    Mitigation

    It’s actually very easy to avoid this pitfall. Don’t start writing tests until you have a plan for what automation will look like when it’s fully implemented. If your team practices continuous integration (running tests as part of the build), don’t start off with a solution that doesn’t have built-in support for your CI/build system. Find an industry-standard tool or technology that meets the business needs, and create a POC (Proof of Concept) that proves your proposed solution integrates correctly and can generate the exact metrics the business needs.

    Write a single test to showcase running through your system and generating metrics. Make sure the stakeholders take the time to understand the proposed output and that they know what decisions that information would impact. Get a documented agreement before moving forward, and then drive everything you do to produce those metrics. If anything in the business changes, start by reevaluating the metrics and resetting expectations. Once everyone is on the same page, start working backwards to update the code. Consistency and accuracy in your reports will be worth more to the business than any cool technical solution or breakthrough that you try to explain upwards.
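
As one hedged sketch of that single metrics-producing test (plain Java; the test ID, CSV format, and checked condition are placeholders), the point is that the reporting contract exists before any real UI automation is written:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;
import java.util.List;

public class SmokeTestWithMetrics {
    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        boolean passed;
        try {
            // Placeholder for the real check, e.g. a Selenium login flow
            passed = "expected".equals("expected");
        } catch (Exception e) {
            passed = false;
        }
        long durationMs = System.currentTimeMillis() - start;

        // The agreed-upon metrics row: test id, status, duration, timestamp.
        // CSV is illustrative; the point is a stable, reportable contract.
        String row = String.format("login_smoke,%s,%d,%s",
                passed ? "PASS" : "FAIL", durationMs, Instant.now());
        Files.write(Path.of("automation-results.csv"), List.of(row),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```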

    If you are in management, you might consider asking for daily automation results with explanations of all test failures. If the team cannot produce that information, have them stop building test cases until the infrastructure is done.

    Key deliverables that should be produced before building out automated scripts:

    • A documented business plan/proposal that clearly lays out the SMART goal you are trying to accomplish.
      • Signed off by the most senior owners of technology in your company.
      • This should be tied to their success.
    • Clear measurements for success. E.g. Reduce regression time, increase coverage for critical workflows, etc.
    • The reports and metrics you will need to support the measurement.
      • Your proposal should include a template with sample (fake) data for all reports.
    • A turnkey process that generates those reports and metrics from automation results data.
    • A single automated test that runs a simple scenario against a real system and generates a business report.

    Takeaway

    The key takeaway is that business and project management skills are critical to the success of any automation initiative. Technical challenges pale in comparison to the issues you will have if you are not aligned with the business. Don’t start writing code until you have gotten written approval and have a turnkey mechanism to produce the metrics that will satisfy your stakeholders. Remember that your project will be judged by the actionable metrics it produces, not by demoing buttons being clicked.

    Failure Reason #2: Automation is treated as a time-boxed project and not part of the software development process

    Automation is not an 8-week project you can swarm on and then hand off to someone else to “maintain.” A common mistake is to take a group of developers to “build the automation framework” and then hand it off to less technical QA to carry forward. Think about that model for your company’s application: imagine hiring 6 senior consultants to build version 1 in 8 weeks and then handing the entire project off to a junior team to maintain and take forward.

    Automation is a software project. It has the same needs for extensibility and maintainability as any other project. As automation is written for legacy and new features, constant work needs to be done to update the framework: new requirements for logging, reporting, or handling new UI functionality. As long as you are making changes to your application, you will be updating the automation. Also keep in mind that most automation frameworks are tied into other systems (like build, metrics, and cloud services), and everything needs to stay in sync as they evolve.

    You quickly end up in a situation where junior engineers are in over their heads and either have to stop working on automation until expert resources free up, or go in and erode the framework with patches and hacks. The end result is conflict, which lowers ROI, generates a perception of complexity and difficulty, and eventually leads to the failure of the project.

    Mitigation

    Again, this is an easy pitfall to avoid. Your business plan for automation should include long-term resources that stay with automation through its initial lifecycle. It’s still beneficial to bring in experts during key parts of framework creation, but the owner(s) of the automation need to be the lead developers. They will build the intimate knowledge required to grow and refactor the framework as tests are automated.

    Additionally, leverage industry-standard technologies. Automation is not an area where you want to be an early adopter. If your organization is building a web application, you will want to pick a framework like Selenium instead of something like m-kal/DirtyBoots. A good standard: as a manager, you should be able to search LinkedIn for the core technologies your team is proposing and find a number of people experienced in them. No matter how awesome a mid-level engineer tells you a new technology is, when he leaves, the next person will insist on rewriting it.

    Takeaway

    If you are using standard technologies and industry best practices, you will not need an elite team of devs to build the framework for QA. The complexity of the automation project should remain fairly constant through the life of your company’s application – updates, new features, UI uplifts. The original creators of the framework should be the same ones automating the bulk of the tests. Additional, less experienced scripters can be added to increase velocity, but a consistent core group will produce the best results for the least investment.

    Failure Reason #3: Automation is created by a collective without a strong owner who is responsible for standards

    Making the automation framework a community project is a very expensive mistake. If your company created a new project initiative with the guideline of “Take 3 months to build a good CRM system in any language that we will use internally” and turned that over to 10 random devs to work on in their spare time, you would expect issues. Automation has the same limitations. A small dedicated team (with members who expect to carry automation forward for at least a year or two) has the time to gather requirements, understand the business needs, build up the infrastructure, and drive the project forward to success. An ad-hoc group with no accountability – especially one whose main members will not be doing the actual creation of tests – is going to struggle.

    Everyone wants to work on the fun POC stage of automation, hacking together technologies to do basic testing and reporting. Most QA engineers have some experience from previous projects and their own ideas about what can and can’t work. Without strong leadership, an approved roadmap, and strict quality controls, you will end up with an ad-hoc project that does a variety of cool things but never quite ties together to give you the information you need for actionable metrics. The team always has low confidence in the tests or in their ability to reproduce results reliably, and there is always a reasonable-sounding excuse why. The fun drains away as weeks turn into months, and your team finds other things to focus on while automation stagnates.

    Eventually it becomes apparent how little business value was produced for all the effort, how much work remains, and that there is no clear owner to hold accountable or to plan how to maintain and move forward. The champions of the fun part have shifted their attention to the next cool project. Management ends up assigning people to sacrifice their productivity by manually generating reports, cleaning up scripts, and trying to train others to use the system. Eventually everyone agrees the process sucks – until a new idea or technology surfaces and the cycle repeats itself.

    Another common mistake is assuming that adding more engineers to write automation will increase ROI. Remember that ROI is measured against the business objective, not lines of code. Unlike application development, there are few established patterns in automation, which means two equally skilled automation engineers will write vastly different automated tests for the same features. Adding less experienced engineers also requires your experienced engineers to stop building automation and start a training, mentoring, and code-review program. For that program to succeed, every test written needs to be reviewed by one or two people to ensure it fits. It will take months until the additional engineers are autonomous and able to contribute without degrading the framework. Additionally, the more complex the application, the more custom systems, features, and controls it can contain, and each of these variations needs a senior engineer to tackle it first. Even with these efforts, the business has to accept that new automation engineers will not write the best tests; it can take years to build the skills and apply the concepts correctly. This is a large factor in the constant “do-overs” that automation projects suffer from.
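    One of the few established patterns that does exist is the page object model, which the Selenium project documents as a recommended practice. Here is a minimal sketch (the page and its element IDs are hypothetical) of how a shared page object gives every engineer the same vocabulary:

        from selenium.webdriver.common.by import By

        class LoginPage:
            """Page object for a hypothetical login screen.

            Tests call log_in() instead of hunting for raw locators,
            so every test of this feature reads the same way."""

            def __init__(self, driver):
                self.driver = driver

            def log_in(self, user, password):
                self.driver.find_element(By.ID, "username").send_keys(user)
                self.driver.find_element(By.ID, "password").send_keys(password)
                self.driver.find_element(By.ID, "submit").click()
                return self

    When code review enforces a pattern like this, two equally skilled engineers automating the same feature produce tests that look and read alike, which shrinks the review burden described above.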

    I would assert that the ONLY business value from automation comes via the metrics and reports it produces. You could have the best automation in the world, but if it just clicks buttons and never produces an actionable report of its findings, it has no value. Good automation is structured to produce a comprehensive report that shows test coverage, is easy to understand, and is accurate from release to release. Imagine having a large group of sales and marketing people, all working separately to generate their own KPIs from their own data. How cohesive would their reports be? Could the business make informed decisions with KPIs from different groups at different scopes? The skill to structure and create valuable automation is not the same as being able to read the Selenium documentation and click a button on a page.
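    As one illustration of what “structured for reporting” can mean, here is a hedged sketch in which every automated test emits a result record with the same shape; the field names and test IDs are hypothetical. Uniform records are what let you aggregate coverage and pass/fail metrics consistently from release to release.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class TestResult:
            test_id: str   # stable ID so results line up release to release
            feature: str   # groups results for coverage reporting
            passed: bool
            details: str = ""

        # Hypothetical records; in practice each automated test emits one
        results = [
            TestResult("TC-101", "Login", True),
            TestResult("TC-102", "Login", False, "Submit button never became clickable"),
        ]
        print(json.dumps([asdict(r) for r in results], indent=2))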

    We should always be working toward an approved business objective. Is the business objective to write test cases as fast as possible, even if they can’t be maintained? Or is it to “automate the regression to free up QA for other tasks”? Shifting your engineers’ time from running manual regressions to babysitting automation does not solve anything (and actually reduces test coverage). In certain cases, slower test case development by a smaller team of experienced engineers is the way to go. As long as you build in redundancy and keep their work open for review and feedback, you will produce value much faster.

    Mitigation

    Building automation that can generate reliable and actionable metrics is non-trivial and requires a lot of structure, discipline, and previous experience. Automation projects should always be championed by one or two engineers experienced with setting up automation projects. They should make a compelling case to the business for what they want to build and the value it will bring. Once the business signs off, they should be given the space to build out the initial POC framework and sample test case(s). Once a working prototype is in place, feedback is solicited and the project moves forward. The core team should be two or three engineers who are equals. Once all the critical areas are automated and the framework is hardened, you can begin training interested individuals by pairing them with an experienced member of the team.

    This initial work should be done by that core team of two or three engineers, and they should be held accountable for its success or failure. It’s critical to make this group show working tests for all the main/critical areas of the product; it’s these initial tests that expose gaps in the framework. Once a working set of automated tests has been completed and exercised from kickoff to report delivery, you can discuss training a small group to start building out test cases and moving automation into other teams.

    Takeaway

    When looking at an automation report, you need to be able to understand, at a glance, what was tested and what wasn’t. When you have questions about failed tests, you need to be able to quickly understand what the test did and what the results were. All tests should have the same scope and voice. Imagine if Feature X has only 5 tests with 100 steps each while Feature Y has 100 tests with 5 steps each: how do you combine those data points to understand the real state of the product? As the group gets larger and larger, it’s harder to maintain a single voice. You will move much faster by letting your core group solve these problems before introducing less experienced engineers.

    Summary

    In this post I discussed the three most common reasons automation fails, ways to avoid them, and how to keep your projects focused on increasing ROI and business value.

    Characteristics of a Good Test Management System https://www.jamasoftware.com/blog/characteristics-good-test-management-system/ Tue, 20 Jan 2015 00:11:11 +0000 https://www.jamasoftware.com/?p=17253

    In product development, designing a good test strategy is challenging in a number of ways and requires broad, strategic thinking. The goal of testing is to ensure that you release a high-quality product that meets customer expectations as documented in your early design concept and requirements gathering phases. Here we’ve laid out the key elements to a successful test strategy.

    Prioritize test cases for efficiency and quality

    It’s vital to prioritize testing by relevance and avoid spending effort on tests that don’t matter. Without proper planning, testing can be one of the most expensive phases of the development lifecycle. To determine test case relevance, trace test efforts to the documented primary objectives of the product or system and prioritize test plans from there. Managing all levels of test cases and maintaining their traceability to objectives and requirements is very important from a relevance standpoint, and it prevents costly testing of functionality that is lower priority or has been changed or deprecated from the product.

    While many teams may be tempted to cut corners to save time or money, it is important to weigh these perceived cost savings against product quality. In the end, if the product doesn’t meet the original objectives, money will be lost, not saved.

    Realize value from your test strategy

    It is very important to conduct reporting and analysis of test results that is deep enough to realize the value of the test strategy. In testing, not only do we want to confirm that we’ve met the objectives of our requirements for the product or system, but we also want to learn new things during all phases of testing. New requirements should be captured as part of results analysis, and these should be incorporated into the next phase of product development. Value from testing is realized by incorporating the results of the test strategy into the product strategy. Testing is the mechanism that proves whether the product strategy is effective.

    Early and frequent testing allows for innovation

    Conducting some form of testing, at every stage of the product development lifecycle, is highly recommended. While it may be time consuming to coordinate stakeholders and then collect and analyze data, getting the right feedback at the right time ensures that you can deliver a high-quality product on time.

    In the early stages of development, performing customer exploratory testing is the most cost-effective way to make sure your product strategy is on the mark. Moreover, fostering collaboration between developers and the customer early on — an Agile best practice — allows for instant feedback that gives development teams the clarity they need to iterate and generate more innovation in the product.

    At the end of the development lifecycle, conduct system integration tests to ensure components are working harmoniously. Unit tests are useful for exercising various inputs and outputs, performance characteristics, and boundary limits, whether you’re building a hardware or software system.
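    As a simple illustration of boundary-limit testing, here is a minimal sketch using pytest. The clamp function is a hypothetical unit under test, chosen only to show how values at, just inside, and just outside each limit get exercised.

        import pytest

        def clamp(value, low=0, high=100):
            # Hypothetical unit under test: constrain a reading to a valid range
            return max(low, min(high, value))

        @pytest.mark.parametrize("raw,expected", [
            (-1, 0), (0, 0), (1, 1),           # around the lower boundary
            (99, 99), (100, 100), (101, 100),  # around the upper boundary
        ])
        def test_clamp_boundaries(raw, expected):
            assert clamp(raw) == expected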

    How To Design a Good Test Management Strategy

    1. Provide mechanisms to trace tests to product objectives and their associated costs, risks, and priorities.
    2. Provide mechanisms for all stakeholders to participate (customers, developers, testers, requirements engineers, and product managers).
    3. Allow large enterprises to coordinate, track, and manage many software testing projects and teams across multiple locations.
    4. Make it easy to create, view, and report linkage between requirements, test cases, test data, test scripts, test results, and defects (see the sketch after this list).
    5. Ensure your process passes test and requirement data between specialized test tools and requirements repositories in an automated fashion.
    6. Provide analytics into testing progress and status through dashboards, reports, and custom queries.
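    To illustrate item 4, here is a minimal sketch of what requirement-to-test linkage can look like as data; the record types and IDs are hypothetical stand-ins for what a dedicated test management tool manages at scale. Even this toy model shows the payoff: uncovered requirements fall out of the data automatically.

        from dataclasses import dataclass, field

        @dataclass
        class Requirement:
            req_id: str
            text: str

        @dataclass
        class TestCase:
            test_id: str
            title: str
            covers: list = field(default_factory=list)  # requirement IDs verified

        reqs = {"REQ-1": Requirement("REQ-1", "Users can log in with valid credentials"),
                "REQ-2": Requirement("REQ-2", "Sessions expire after 30 minutes")}
        tests = [TestCase("TC-101", "Valid login", covers=["REQ-1"])]

        # A traceability view exposes requirements with no covering test
        covered = {rid for t in tests for rid in t.covers}
        print("Uncovered requirements:", [r for r in reqs if r not in covered])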