Why Autonomous, AI-based Software Tests Save Costs

Tuesday, May 4, 2021, 09:40 PM, from eWEEK
Software testing is often one of the most expensive, and least efficient, components of an IT budget. Most IT leaders are shocked to learn how much their organization spends on it: over the past five years, software testing has consumed an average of 23% to 35% of a company’s overall IT spend. With hundreds of projects and thousands of applications to test, those costs add up quickly.
It doesn’t have to be this way. There’s a huge opportunity to unlock substantial cost savings by optimizing testing. In this eWEEK Data Points article, Martin Klaus, Vice President at Tricentis, draws on his own industry knowledge to outline five ways business leaders can realize major cost savings by upgrading their testing practices.
Data Point No. 1: Democratize automation
Taking a step back and looking at the big picture of an organization’s IT infrastructure, automation can make the difference between propelling a business forward and setting it back with continued manual development and maintenance. More than 80% of testing is still manual, because many of the testers who verify functional requirements lack the technical skills needed to write automation scripts. That creates a bottleneck and consumes resources without achieving the speed, precision and scalability required for modern delivery processes.
While it’s unrealistic for an organization to eliminate manual technical work entirely, there are ways to reduce it and add automation, which ultimately lowers the risk of functional issues and delays.
Test automation doesn’t necessarily require deep programming skills. No-code, model-based or scriptless automation, which focuses primarily on business processes, system integration or user acceptance, empowers everyone to achieve a much higher level of productivity with far less manual effort. As such, the sooner IT departments can automate user-acceptance testing, the better off they’ll be from both a risk and a business-value standpoint.
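For contrast, here is a minimal sketch of the kind of scripted check that no-code tools generate behind the scenes; the business rule and the function under test (apply_discount) are hypothetical stand-ins:

```python
# A minimal scripted acceptance check (pytest style). apply_discount
# is a hypothetical stand-in for a real business process, included
# here so the example runs standalone.
def apply_discount(total: float, pct: float) -> float:
    return round(total * (1 - pct / 100), 2)

def test_discount_applied_to_cart_total():
    # Business rule: a 10% discount on a 200.00 cart yields 180.00.
    assert apply_discount(200.00, 10) == 180.00

def test_full_discount_reaches_zero():
    assert apply_discount(200.00, 100) == 0.00
```

Writing and maintaining checks like this for every business process is exactly the skill barrier that scriptless tooling removes.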
Data Point No. 2: Prioritize business-risk coverage
With hundreds of projects and thousands of applications to test, costs add up quickly, and so does the time required. That time is something you may not have if your approach is to achieve 100% test coverage for every single release. Additionally, a raw pass/fail rate may not tell you whether the most critical functionality, the functionality that exposes the highest business risk, has been adequately tested. This is where the 80:20 rule comes into play: roughly 80% of users use only 20% of the functionality.
When you weight test cases by the severity and frequency with which a failure can impact the business, you can prioritize what to test far more effectively. You’ll get much better business-risk coverage for much less work. More importantly, the pass/fail rate of your tests gives you a far better indication of how much business disruption a potential application failure could cause.
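As a minimal sketch of that weighting, assuming a simple risk score of frequency times damage on hypothetical 1-10 scales (the test names and scores are illustrative, not from the article):

```python
# A minimal sketch of risk-based test prioritization: score each test
# case by how often the covered feature is used and how much damage a
# failure would cause, then run the highest-risk cases first.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    frequency: int  # how often the feature is used (1-10, hypothetical)
    damage: int     # business impact of a failure (1-10, hypothetical)

    @property
    def risk(self) -> int:
        return self.frequency * self.damage

cases = [
    TestCase("checkout_payment", frequency=9, damage=10),
    TestCase("profile_avatar_upload", frequency=2, damage=2),
    TestCase("search_results", frequency=8, damage=6),
]

# Highest-risk cases run first; low scorers are candidates to defer.
for tc in sorted(cases, key=lambda c: c.risk, reverse=True):
    print(f"{tc.name}: risk={tc.risk}")
```

Even this crude scoring makes the 80:20 trade-off explicit: the low-frequency, low-damage case is the one to cut when time runs short.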
Your test automation will also be far more effective when it is optimized for business-risk coverage, and you’ll be able to deploy application releases much more frequently.
Data Point No. 3: Use your DevOps superpowers
Everyone knows that the DevOps superpowers of continuous testing, continuous integration and continuous deployment are essential for driving large-scale software delivery in high-performing teams. It’s all about efficiency, quality and optimizing the software value stream. Yet despite the existence of these proven methods, many organizations are still struggling to master enterprise-scale software delivery in order to deliver quality at speed.
Siloed, disconnected testing efforts that aren’t integrated with the overall delivery pipeline commonly lead to duplicated work and avoidable rework. In fact, more than half of the tests being built, maintained and executed are redundant and add negligible value to the testing effort.
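One rough way to surface that redundancy: fingerprint each test by the set of code paths or requirements it covers and flag any test whose coverage is contained in another’s. The coverage data below is hypothetical; in practice it would come from a coverage tool or a requirements-traceability matrix.

```python
# A rough sketch of flagging redundant tests: a test whose covered
# items are a subset of another test's coverage adds little value.
# Coverage sets here are hypothetical placeholders.
coverage = {
    "test_login_happy_path": {"auth.login", "auth.session"},
    "test_login_full_flow":  {"auth.login", "auth.session", "auth.mfa"},
    "test_mfa_only":         {"auth.mfa"},
}

redundant = [
    name for name, items in coverage.items()
    if any(other != name and items <= coverage[other]
           for other in coverage)
]
print(redundant)  # ['test_login_happy_path', 'test_mfa_only']
```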
Frustrating rework, misunderstandings and mistakes can be prevented by coordinating all QA activities in a central platform that syncs efforts across development, testing and project management. To deliver the greatest ROI in the shortest time, it’s important to promote the communication, collaboration and transparency teams need to align on common goals and success metrics, and to invest in tools that align testing with business risks.
Data Point No. 4: Simulate applications and services
Testers often need to operate while the application is still a work in progress, which typically makes the tester the bottleneck to application delivery. The best strategy to overcome this problem is what is called orchestrated service virtualization (OSV).
The concept of OSV is simple: stub out any functionality that is not ready for testing and simulate its behavior. As modules and components become ready to be verified for functional correctness, you replace the simulated behavior with live code. This removes the single biggest barrier to achieving continuous testing: access to a complete application under test, with all dependent systems providing the appropriate configuration, functionality and test data.
By simulating these dependencies, you ensure that your tests encounter the appropriate dependency behavior and data every time they execute. It can also reduce your testing cost if your application requires expensive hardware or cloud infrastructure, and it ensures that testing can proceed even when test environments are unavailable or unstable.
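A minimal sketch of the stub-then-replace idea, with a hand-rolled simulation standing in for a dependent payment service (the PaymentGateway interface and all names are hypothetical):

```python
# A minimal sketch of service virtualization: the dependency is
# simulated until it is ready, then swapped in without changing tests.
from typing import Protocol

class PaymentGateway(Protocol):
    """Hypothetical interface of a dependent service."""
    def charge(self, amount_cents: int) -> str: ...

class SimulatedGateway:
    """Stands in for the real service; returns canned, deterministic data."""
    def charge(self, amount_cents: int) -> str:
        return "SIMULATED-TXN-0001"

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # Application code depends only on the interface, not the implementation.
    return gateway.charge(amount_cents)

def test_checkout_returns_transaction_id():
    # Runs today against the simulation; when the live gateway ships,
    # only the injected object changes and the test body stays the same.
    txn = checkout(SimulatedGateway(), 1999)
    assert txn  # canned data is deterministic and always available
```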
Data Point No. 5: Shift-left testing
For decades, testing was traditionally deferred until the end of the development cycle. As a result, test teams were a step behind developers, providing important feedback too late in the process and creating bottlenecks in application delivery. Even today, the vast majority of testing is performed at the UI level, but UIs typically aren’t completed until the very end of each development cycle.
With new testing technologies incorporating AI, teams can begin to build automated test cases much earlier in the process, starting before the UI exists with just a mockup or a low-fidelity prototype. These AI-based testing solutions allow products to reach the market much sooner, because they eliminate testing delays by providing instant, actionable feedback to the people who care about it at a time when they are truly prepared to act on it.
Another benefit is being able to test at the API layer and use AI-based technologies to create UI tests before UIs are completed. As a result, there is an instant feedback loop between functional test behavior and development teams while the application is being built, as it moves from idea to concept, prototype and ultimately production.
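As a sketch of what an API-layer test looks like before any UI exists (the staging URL, endpoint and payload below are hypothetical placeholders, not part of the article):

```python
# A minimal sketch of an API-layer test that needs nothing from the UI.
# The base URL, endpoint and payload are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com/api"

def test_create_order_returns_an_id():
    resp = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "qty": 2},
        timeout=5,
    )
    assert resp.status_code == 201
    assert "order_id" in resp.json()
```

Because a check like this exercises the service contract directly, it can run on every commit long before the screens that will eventually sit on top of it are built.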
When it comes to digital transformation, it’s not simply a matter of more tools or different tools. It requires a deeper change across people and processes as well as technologies, and software testing is a key example of where that change pays off.
Source: https://www.eweek.com/news/why-autonomous-ai-based-software-tests-save-costs/