There’s more to testing than simply testing

Rapid innovation and the digitalization of everything are increasing application complexity and the complexity of the environments in which applications run. While there’s an increasing emphasis on continuous testing as more DevOps teams embrace CI/CD, some organizations are still disproportionately focused on functional testing.

“Just because it works doesn’t mean it’s a good experience,” said Thomas Murphy, senior director analyst at Gartner. “If it’s my employee, sometimes I make them suffer, but that means I’m going to lose productivity and it may impact employee retention. If it’s my customers, I can lose retention because I did not meet the objectives in the first place.”

Today’s applications should help facilitate the organization’s business goals while providing the kind of experience end users expect. To accomplish that, software teams must take a more holistic approach to testing than they traditionally have, one that involves more types of tests and more roles in testing.

“The patterns of practice come from architecture and the whole idea of design patterns,” said Murphy. “The best practices of 10 years ago are not best practices today, and the best practices of three years ago are probably not best practices today. The leading practices are the things Google, Facebook and Netflix were doing three to five years ago.”

Chris Lewis, engineering director at technology consulting firm DMW Group, said his enterprise clients are seeing the positive impact a test-first mindset has had over the past couple of years.

“The things I’ve seen [are] particularly in the security and infrastructure world, where historically testing hasn’t been something that’s been on the agenda. Those people tend to come from more traditional, typically full-stack software development backgrounds, and they’re now wanting more control of the development processes end to end,” said Lewis. “They started to inject testing thinking across the life cycle.”

Nancy Kastl, executive director of testing services at digital transformation agency SPR, said a philosophical evolution is occurring regarding what to test, when to test and who does the testing.

“Regarding what to test, the movement continues away from both manual [and] automated UI testing methods and toward API and unit-level testing. This allows testing to be done sooner and more efficiently, and fosters better test coverage,” said Kastl.
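To make the distinction concrete, here is a minimal sketch of what testing below the UI can look like in Python with pytest; the `calculate_discount` function and the `/orders` endpoint are hypothetical stand-ins for real application code, not anything from SPR:

```python
# A minimal sketch of unit- and API-level tests with pytest.
# `calculate_discount` and the /orders endpoint are hypothetical
# stand-ins for the application code under test.
import requests

from myapp.pricing import calculate_discount  # hypothetical module


def test_discount_is_capped_at_50_percent():
    # Unit level: exercises business logic directly, no UI involved.
    assert calculate_discount(price=100, loyalty_years=30) <= 50


def test_create_order_returns_201():
    # API level: hits the service boundary instead of driving a browser.
    response = requests.post(
        "http://localhost:8000/orders",  # assumed local test server
        json={"sku": "ABC-123", "quantity": 2},
    )
    assert response.status_code == 201
    assert response.json()["status"] == "created"
```

Because tests like these skip browser automation entirely, they can run on every commit in seconds, which is what makes the earlier, more efficient testing Kastl describes practical.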

As for when, that means testing earlier and throughout the SDLC.

“Companies are continuing to adopt Agile or improve the way they are using Agile to achieve its benefits of continuous delivery,” said Kastl. “With the current movement to continuous integration and delivery, the shift-left philosophy is now embedded in continuous testing.”

However, when everyone’s responsible for testing, arguably nobody’s responsible, unless it’s clear who should test what, when, and how. Testing can no longer be the sole domain of testers and QA engineers, because finding and fixing bugs late in the SDLC is inadequate, unnecessarily costly, and untenable as application teams continue to shrink their delivery cycles. As a result, testing must shift left to developers and right to production, involving more roles.

“This continues to be a matter of debate. Is it the developers, testers, business analysts, product owners, business users, project managers [or] someone else?” said Kastl. “With an emphasis on test automation requiring coding skills, some argue for developers to do the testing beyond just unit tests.”

Meanwhile, the scope of tests continues to expand beyond unit, integration, system and user acceptance testing (UAT) to include security, performance, UX, smoke, and regression testing. Feature flags, progressive software delivery, chaos engineering and test-driven development are also considered part of the testing mix today.

Security goes beyond penetration testing

Organizations, irrespective of industry, are prioritizing security testing to minimize vulnerabilities and manage threats more effectively.

“Threat modeling would be a starting point. The other thing is that AI and machine learning are giving me more informed views of both code and code quality,” said Gartner’s Murphy. “There are so many different kinds of attacks that occur, and sometimes we think we’ve taken these precautions, but the problem is that while you were able to stop [an attack] one way, they’re going to find different ways to launch it, different ways it’s going to behave, different ways that it will be hidden so you don’t detect it.”

In addition to penetration testing, organizations may use a combination of tools and services that can vary based on the application. Some of the more common ones are static and dynamic application security testing, mobile application security testing, database security testing, software composition analysis and appsec testing as a service.

DMW Group’s Lewis said his organization helps clients improve the way they define their compliance and security rules as code, typically working with people in conventional security architecture and compliance functions.

“We get them to think about what the outcomes are that they really want to achieve, and then provide them with the expertise to actually turn those into code,” said Lewis.
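As a rough illustration of the rules-as-code idea (the article does not describe DMW Group’s actual tooling), a compliance rule can be expressed as an executable check that runs in the pipeline; the rule names and resource shape below are assumptions:

```python
# A sketch of compliance rules expressed as executable checks.
# The rule names and resource dictionary shape are assumptions
# made for illustration, not DMW Group's actual implementation.
SECURITY_RULES = {
    "encryption_at_rest": lambda r: r.get("encrypted", False),
    "no_public_access": lambda r: not r.get("public", True),
    # Naive string compare is adequate for "1.0"-"1.3" style values.
    "tls_minimum_1_2": lambda r: r.get("tls_version", "1.0") >= "1.2",
}


def check_resource(resource: dict) -> list[str]:
    """Return the names of every rule this resource violates."""
    return [name for name, rule in SECURITY_RULES.items() if not rule(resource)]


if __name__ == "__main__":
    storage_bucket = {"encrypted": True, "public": False, "tls_version": "1.2"}
    violations = check_resource(storage_bucket)
    # An empty list means the resource passes; in CI, a non-empty
    # list would fail the build before anything is deployed.
    assert not violations, f"compliance violations: {violations}"
```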

SPR’s Kastl said continuous delivery requires continuous security verification to provide early insight into potential security vulnerabilities.

“Security, like quality, is hard to build in at the end of a software project and should be prioritized throughout the project life cycle,” said Kastl. “The Application Security Verification Standard (ASVS) is a framework of security requirements and controls that define a secure application when developing and testing modern applications.”

Kastl said that includes the OWASP Top 10:

“The OWASP Top 10 is a list of the most common security vulnerabilities,” said Kastl. “It’s based on data gathered from hundreds of organizations and over 100,000 real-world applications and APIs.”

Performance testing beyond load testing

Load testing ensures that the application continues to operate as intended as the workload increases, with emphasis on the upper limit. By comparison, scalability testing considers both minimum and maximum loads. In addition, it’s wise to test outside of normal workloads (stress testing), to see how the application performs when workloads suddenly spike (spike testing), and how well a normal workload endures over time (endurance testing).
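As one way to exercise those workload profiles, here is a minimal load test sketch using the open-source Locust framework; the endpoints and task weights are illustrative:

```python
# A minimal Locust load test; Locust is an open-source Python load
# testing framework. The endpoints and task weights are illustrative.
from locust import HttpUser, task, between


class Shopper(HttpUser):
    # Each simulated user waits 1-5 seconds between actions.
    wait_time = between(1, 5)

    @task
    def browse_products(self):
        self.client.get("/products")

    @task(3)  # viewing the cart is weighted 3x more likely
    def view_cart(self):
        self.client.get("/cart")
```

A run such as `locust -f loadtest.py --host https://staging.example.com` then ramps simulated users up against a staging environment; the same script serves for stress, spike, or endurance runs depending on how many users you spawn and for how long.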

“Performance really impacts people from a usability perspective. It used to be that if your page didn’t load within this amount of time, they’d click away. And then it wasn’t just about the page, it was about the performance of specific elements that could be mapped to shopping cart behavior,” said Gartner’s Murphy.

For example, GPS navigation and wearable technology company Garmin suffered a multi-day outage when it was hit by a ransomware attack in July 2020. Its devices were unable to upload activity to Strava’s mobile app and website for runners and cyclists. The situation underscores the fact that cybersecurity breaches can have downstream effects.

“I think Strava had a 40% drop in data uploads. Pretty soon, all this data from the last three or four days is going to start uploading to them, so they’re going to get hit with a spike of data. Those types of things can happen,” said Murphy.

To prepare for that sort of thing, one could run performance and stress tests on every build or use feature flags to compare performance with the prior build.

Instead of waiting for a load test at the end of a project to detect potential performance issues, performance tests can be used to baseline the performance of an application under development.

“By measuring the response time for a single user performing specific functions, these metrics can be gathered and compared for each build of the application,” said Kastl. “This provides an early warning of potential performance issues. These baseline performance tests can be integrated with your CI/CD pipeline for continuous monitoring of the application’s performance.”
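A hedged sketch of that baseline approach: time a single user’s key request on each build and fail the pipeline when it drifts too far from a stored baseline. The endpoint, baseline file, and tolerance below are assumptions, not SPR’s implementation:

```python
# A sketch of a baseline response-time check that runs per build.
# The endpoint, baseline file, and tolerance are assumptions.
import json
import time

import requests

BASELINE_FILE = "perf_baseline.json"  # e.g. {"GET /orders": 0.25}
TOLERANCE = 1.5                       # fail if 50% slower than baseline


def measure(method: str, url: str) -> float:
    """Time a single request by a single user, in seconds."""
    start = time.perf_counter()
    requests.request(method, url).raise_for_status()
    return time.perf_counter() - start


def test_orders_endpoint_within_baseline():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    elapsed = measure("GET", "http://localhost:8000/orders")
    assert elapsed <= baseline["GET /orders"] * TOLERANCE, (
        f"took {elapsed:.3f}s, baseline {baseline['GET /orders']:.3f}s"
    )
```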

Mobile and IoT devices, such as wearables, have increased the need for more comprehensive performance testing, and there’s still a lot of room for improvement.

“As the industry has moved more to cloud-based technology, performance testing has become more paramount,” said Todd Lemmonds, QA architect at health benefits company Anthem, a Sauce Labs customer. “One of my current initiatives is to integrate performance testing into the CI/CD pipeline. It’s always done more toward UAT which, in my mind, is too late.”

To effect that change, the developers need to think about performance and how the analytics need to be structured in a way that allows the business to make decisions. The artifacts can be used later during a full-system performance test.

“We’ve migrated three channels onto the cloud, [but] we’ve never done a performance test of all three channels working at capacity,” said Lemmonds. “We need to think about that stuff and predict the growth pattern over the next five years. We need to make sure that not only can our cloud technologies handle that, but what the full system performance is going to look like. Then you run into issues, like all of our subsystems not being able to handle the database connections, so we have to come up with all kinds of ways to virtualize the services, which is nothing new to Google and Amazon, but [for] a company like Anthem, it’s very difficult.”

DMW Group’s Lewis said some of his clients have ignored performance testing in cloud environments since cloud environments are elastic.

“We have to bring them back to reality and say, ‘Look, there is an art form here that has significantly changed, and you really need to start thinking about it in more detail,’” said Lewis.

UX testing beyond UI and UAT

While UI and UAT testing remain important, UI testing is only a subset of what needs to be done for UX testing, and traditional UAT happens late in the cycle. Feature flagging helps by providing early insight into what’s resonating with users and what isn’t, while generating valuable data. There’s also usability testing, including focus groups, session recording, eye tracking, and quick one-question in-app surveys that ask whether the user loves the app or not.
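For illustration, a feature flag at its core is just a deterministic guard around new functionality; teams usually rely on a flag service, but a minimal hand-rolled sketch (all names hypothetical) looks like this:

```python
# A hand-rolled feature flag sketch; production teams typically use a
# flag service instead. All names here are hypothetical.
import hashlib

ROLLOUT_PERCENT = {"new_checkout": 20}  # expose the flag to 20% of users


def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user so their experience stays stable."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < ROLLOUT_PERCENT.get(flag, 0)


def checkout(user_id: str) -> str:
    if is_enabled("new_checkout", user_id):
        return "new checkout flow"  # instrument both paths and compare
    return "legacy checkout flow"
```

Because bucketing is deterministic, each user consistently sees the same variant, which is what makes the comparison data trustworthy.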

One area that tends to lack adequate focus, however, is accessibility testing.

“More than 54 million U.S. consumers have disabilities and face unique challenges accessing products, services and information on the web and mobile devices,” said SPR’s Kastl. “Accessibility must be addressed throughout the development of a project to ensure applications are accessible to individuals with vision loss, low vision, color blindness or learning loss, and to those otherwise challenged by motor skills.”

The main issue is a lack of awareness, especially among people who lack firsthand or secondhand experience with disabilities. And while regulators are not actively enforcing accessibility standards, accessibility-related lawsuits are growing exponentially.

“The first step to ensuring an application’s accessibility is to include ADA Section 508 or WCAG 2.1 accessibility standards as requirements in the product’s backlog, along with functional requirements,” said Kastl.

“Non-compliance with an accessibility standard on one web page tends to be repeated on all web pages or throughout a mobile application. To detect non-compliant practices as early as possible, wireframes and templates for web and mobile applications should be reviewed for potentially non-compliant design components,” Kastl said. In addition to the design review, there should be a code review in which development teams perform self-assessments, using tools and practices to identify standards that have not been followed in coding. Corrective action should be taken by the team prior to the start of application testing. Then, during in-sprint testing activities, assistive technologies and tools such as screen readers, screen magnification and speech recognition software should be used to test web pages and mobile applications against accessibility standards. Automated tools can detect and report non-compliance.
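As a small example of the kind of automated detection Kastl describes, a script can flag images missing alt text, one of the most common WCAG failures; real audits use dedicated engines such as axe-core, so this is only a sketch of the idea:

```python
# A toy accessibility check: flag <img> tags with no alt text, one of
# the most common WCAG failures. Real audits use dedicated engines
# such as axe-core; this only illustrates automated detection.
from bs4 import BeautifulSoup


def images_missing_alt(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    return [str(img) for img in soup.find_all("img") if not img.get("alt")]


page = '<img src="logo.png"><img src="chart.png" alt="Q3 revenue chart">'
for violation in images_missing_alt(page):
    print("missing alt text:", violation)  # -> <img src="logo.png"/>
```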

Gartner’s Murphy said organizations should be monitoring app ratings and reviews as well as social media sentiment on an ongoing basis.

“You have to monitor those things, and you should. You’re feeding stuff like that into a system such as Statuspage or PagerDuty so that you know something’s gone wrong,” said Murphy. “It may not just be monitoring your site. It’s also monitoring those external sources, because they may be the leading indicator.”
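As a sketch of that feedback loop, an external signal such as a sentiment monitor could raise an incident through PagerDuty’s Events API v2; the threshold, routing key, and monitor name below are placeholders, not a recommended configuration:

```python
# A sketch of routing an external signal into PagerDuty via its
# Events API v2. The routing key and threshold are placeholders;
# check PagerDuty's documentation for the full event schema.
import requests

EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"
ROUTING_KEY = "YOUR_INTEGRATION_KEY"  # placeholder


def alert_on_sentiment_drop(score: float, threshold: float = -0.5) -> None:
    if score >= threshold:
        return  # sentiment is acceptable; nothing to page about
    requests.post(
        EVENTS_URL,
        json={
            "routing_key": ROUTING_KEY,
            "event_action": "trigger",
            "payload": {
                "summary": f"App review/social sentiment dropped to {score:.2f}",
                "source": "sentiment-monitor",
                "severity": "warning",
            },
        },
        timeout=10,
    ).raise_for_status()
```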
