Generative AI in software development and testing: Use cases & best practice

ChatGPT has made the power of generative AI accessible to all, and it’s something that is being widely embraced. A Gartner poll from May this year tells us that ChatGPT has prompted an increase in AI investment, with 70% of organisations saying that they were in ‘exploration mode’ with the tech, and VC firms investing more than $1.7 billion in generative AI solutions in just the last three years.

Multiple sectors stand to gain from generative AI’s capabilities for guidance and automation, but software development and testing will be disrupted entirely. Everything that we as developers and testers do is going to be augmented by AI, with some practices replaced outright. ChatGPT can already build 90% of the code that developers require; with some prompt engineering, it can get 100% of the way there far faster than a human could.
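
To make that concrete, here is a minimal sketch of prompt-driven code generation, assuming the OpenAI Python SDK; the model name and the prompt wording are illustrative assumptions, not a recommendation.

```python
# Minimal sketch: requesting code from a chat model via the OpenAI Python SDK.
# The model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Python function `parse_iso_timestamps(lines)` that extracts "
    "ISO-8601 timestamps from log lines and returns them as datetime objects. "
    "Include type hints and skip malformed lines."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```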

This holds enormous potential for productivity and output gains. But it also means that the success of quality engineering relies on fostering cross-functional collaboration within and beyond an organisation and, frankly, beyond the human species. By adopting some best practice principles, quality engineering teams can help guarantee success throughout the generative AI integration process.

Refining your integration strategy

In the era of generative AI, the pace of change in software development and testing has reached unprecedented levels. With sophisticated technology improving go-to-market time, competitors’ products are hitting the shelves faster than ever before, and digital experience is becoming a new competitive differentiator. Your application needs to be accessible, run smoothly, and all but eliminate bugs and outages just to be considered ‘functional’!

Despite its limitations, generative AI can be enormously useful for playing out scenarios and allowing us to think about problems in new ways, increasing our confidence in any endeavour. The industry needs to experiment with ways to use this to predict where things will go wrong, and to iterate on ideas and hypotheses.

What are the key areas to focus on when it comes to integration, and how do we derive the most value out of generative AI?

Best practices for generative AI

Firstly, encouraging a culture of feedback and learning, where teams can openly share insights and lessons learned, is critical for continuous improvement in quality engineering. Bringing generative AI models into these feedback loops will enhance your team’s ability to spot mistakes and rectify them early on.

Establishing mechanisms for gathering feedback from end-users, stakeholders, and customer support teams – and for feeding this information into your AI – will help you to prioritise quality enhancements. The aim should be to create effective feedback loops that can combine human intelligence (HI) with AI, continuous testing (CT) and continuous monitoring (CM) methods, ensuring releases become more reliable and error-free.
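
As one illustration of such a loop, the sketch below routes feedback from different sources through a stand-in AI triage step and into a prioritised test backlog. The `Feedback` schema and the keyword heuristic are hypothetical placeholders for a real classifier.

```python
# Illustrative feedback loop: user/stakeholder feedback is collected, triaged
# with a placeholder "AI" classifier, then sorted into a test backlog so
# continuous testing targets the riskiest areas first.
from dataclasses import dataclass

@dataclass
class Feedback:
    source: str          # "end-user", "stakeholder", "support"
    text: str
    severity: int = 0    # filled in during triage

def ai_triage(item: Feedback) -> Feedback:
    """Stand-in for an AI severity classifier; here, a keyword heuristic."""
    keywords = {"crash": 3, "outage": 3, "slow": 2, "typo": 1}
    item.severity = max(
        (score for word, score in keywords.items() if word in item.text.lower()),
        default=1,
    )
    return item

def build_test_backlog(feedback: list[Feedback]) -> list[Feedback]:
    triaged = [ai_triage(f) for f in feedback]
    return sorted(triaged, key=lambda f: f.severity, reverse=True)

backlog = build_test_backlog([
    Feedback("end-user", "App crashes when uploading a photo"),
    Feedback("support", "Checkout page is slow on mobile"),
])
for item in backlog:
    print(item.severity, item.text)
```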

Secondly, it’s really important that generative AI models undergo rigorous verification and testing to assess their reliability, accuracy, and performance. Recognise the technology’s limitations, develop robust validation procedures to evaluate its outputs, and establish comprehensive testing frameworks – this will help you uncover potential biases within the AI models.
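
One way to avoid trusting model output automatically is to gate every generated artefact behind deterministic checks before a human ever reviews it. The sketch below does this for AI-generated Python code; the specific checks are illustrative assumptions, not an established framework.

```python
# Hedged sketch of a validation harness that never auto-trusts model output:
# generated code must pass cheap, deterministic checks before acceptance.
import ast

def validate_generated_code(source: str) -> list[str]:
    """Run deterministic sanity checks over AI-generated Python code."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"does not parse: {exc}"]
    if not any(isinstance(node, ast.FunctionDef) for node in ast.walk(tree)):
        problems.append("no function definition found")
    if "eval(" in source or "exec(" in source:
        problems.append("uses eval/exec - flag for human review")
    return problems

candidate = "def add(a, b):\n    return a + b\n"
issues = validate_generated_code(candidate)
print("accepted" if not issues else f"rejected: {issues}")
```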

The ‘gold standard’ for verification needs to be a robust testing method that doesn’t automatically trust the AI. The beauty of generative AI is that you can invite your stakeholders to weigh in or provide sentiment before taking its answers at face value, and it’s these interactions that will improve the AI model over time, as well as the quality of its answers.

Another key consideration should be adopting a data-driven approach – this can greatly enhance the effectiveness and efficiency of quality engineering. So harness the power of your data.

Leverage all your test results, defect reports, and performance metrics, and synthesise this corpus of information with AI to help spot patterns and provide insights into the quality of your software. Use AI to define the key metrics and KPIs that will support your overall quality goals.
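
For instance, a few lines of pandas can already surface patterns such as flaky tests from historical results; the column names and the flakiness threshold below are assumptions rather than a standard schema.

```python
# Sketch: mining intermittent (flaky) tests from historical run results.
import pandas as pd

runs = pd.DataFrame({
    "test": ["t_login", "t_login", "t_login", "t_pay", "t_pay", "t_pay"],
    "passed": [True, False, True, True, True, True],
})

# Failure rate per test; a rate strictly between 0 and 0.5 suggests flakiness
# rather than a hard break (threshold is an illustrative choice).
fail_rate = 1 - runs.groupby("test")["passed"].mean()
flaky = fail_rate[(fail_rate > 0) & (fail_rate < 0.5)]
print("intermittent failures worth investigating:")
print(flaky)
```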

The last thing you need to pay attention to is continuous monitoring. Quality engineering should not be limited to pre-production continuous testing alone. Implement continuous monitoring mechanisms to capture real-time data on system performance, usage patterns, anomalies, and user feedback. This enables proactive identification of issues, supports iterative improvements, and can warn of impending failure before it occurs – all of which drives continuous improvement in software quality.
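
Here is a minimal sketch of that idea, assuming a rolling z-score over a latency stream; a production system would feed this from a real metrics pipeline and tune the window and threshold.

```python
# Minimal continuous-monitoring sketch: flag latency samples that deviate
# sharply from the recent baseline, before they become outages.
from collections import deque
from statistics import mean, stdev

window: deque[float] = deque(maxlen=60)  # last 60 samples

def observe(latency_ms: float) -> bool:
    """Return True if this sample looks anomalous against recent history."""
    anomalous = False
    if len(window) >= 30:  # wait for a minimal baseline
        mu, sigma = mean(window), stdev(window)
        anomalous = sigma > 0 and abs(latency_ms - mu) > 3 * sigma
    window.append(latency_ms)
    return anomalous

for sample in [120, 118, 125, 119, 121] * 10 + [480]:
    if observe(sample):
        print(f"alert: latency {sample} ms deviates from recent baseline")
```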

Applications for generative AI in quality engineering

PwC reported that 46% of executives will invest in generative AI over the next 12-18 months. This is testament to the growing number of use cases across key industries like healthcare, energy, and logistics. Three of the most useful applications for generative AI in QE specifically are test data generation, defect prediction and analysis, and test optimisation and prioritisation.

For example, AI-powered generative models can create synthetic test data that closely resembles real-world scenarios. This eliminates the need for manual data creation or extraction, reducing the time and effort involved in test data management. Quality engineers can leverage generative AI to quickly generate large-scale, diverse, and realistic test datasets, facilitating comprehensive testing and reducing data-related bottlenecks.
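
As a simple illustration, the Faker library can already generate realistic synthetic records, and an LLM could fill the same role for more domain-specific data; the user schema below is made up for the example.

```python
# Sketch: generating realistic synthetic test users with the Faker library.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible datasets for repeatable test runs

def synthetic_users(n: int) -> list[dict]:
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "signup": fake.date_time_this_year().isoformat(),
            "country": fake.country_code(),
        }
        for _ in range(n)
    ]

for user in synthetic_users(3):
    print(user)
```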

Generative AI algorithms can also be trained on historical defect data to predict potential defects in software systems. By analysing code patterns, design structures, and test coverage, AI models can identify areas prone to defects and provide early warnings. Quality engineers can proactively address these issues, improving the overall quality of the software and reducing the time and cost associated with defect detection and resolution.
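
A hedged sketch of what such a model might look like, using a logistic regression over simple code metrics: the features and the toy training data are purely illustrative, since a real model would be trained on your own defect history.

```python
# Illustrative defect-prediction sketch: logistic regression over simple
# per-module metrics (recent churn, complexity, test coverage).
from sklearn.linear_model import LogisticRegression

# columns: [lines changed recently, cyclomatic complexity, test coverage %]
X = [[500, 25, 20], [30, 5, 90], [220, 18, 40],
     [10, 3, 95], [400, 30, 15], [50, 6, 85]]
y = [1, 0, 1, 0, 1, 0]  # 1 = module later had a defect

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba([[300, 22, 30]])[0][1]
print(f"predicted defect probability: {risk:.0%}")
```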

The test suite can be optimised by using generative AI to prioritise test cases based on criticality, code coverage, and risk factors. AI algorithms can analyse code changes, historical test results, and system complexity to determine the most effective test scenarios. By intelligently selecting and prioritising tests, quality engineers can achieve higher test efficiency, faster feedback cycles, and improved software quality.
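
One simple way to approximate this is a weighted risk score, as in the sketch below; the weights and fields are assumptions a team would tune, not an established standard.

```python
# Illustrative test prioritisation: weighted risk score over criticality,
# coverage of recently changed code, and historical failure rate.
def risk_score(test: dict) -> float:
    return (0.5 * test["criticality"]        # business impact, 0-1
            + 0.3 * test["change_coverage"]  # fraction of changed lines hit
            + 0.2 * test["fail_rate"])       # historical failure rate, 0-1

suite = [
    {"name": "test_checkout", "criticality": 1.0, "change_coverage": 0.8, "fail_rate": 0.1},
    {"name": "test_profile",  "criticality": 0.3, "change_coverage": 0.1, "fail_rate": 0.02},
    {"name": "test_search",   "criticality": 0.7, "change_coverage": 0.6, "fail_rate": 0.3},
]

# Run the highest-risk tests first for the fastest meaningful feedback.
for test in sorted(suite, key=risk_score, reverse=True):
    print(f"{test['name']}: {risk_score(test):.2f}")
```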

These applications are already being used in real QE scenarios to support business operations. However, AI can also play a critical role in the planning stage. Developers and testers can (and should) use different AI models to generate new ideas and approaches by asking difficult questions, and analysing not only the AI’s answers but the approaches taken for reaching its conclusions. Rather than expecting a ‘correct’ answer from the AI, QE teams can learn a lot by simply experimenting. This will become a critical part of the value we derive from AI in the future.

Looking ahead

Software testers and developers are relatively ahead of the curve in their thinking about what generative AI means for the future. Our jobs are already being redefined, questions are being asked about what skills are still required, and specialist knowledge related to the application of AI in our industry is being developed really quickly. But for everyone, no matter their occupation, the future is being written with AI at the forefront – this is undeniable.

The implications of this will be far reaching. The most important thing for businesses is to remain agile. AI is fast moving, and staying on top of new technological developments will be critical for success. Nailing down your integration strategy and rigorously maintaining best practices like those mentioned above will be essential in achieving business aims and future-proofing operations.

Article written by Bryan Cole, Director of Customer Engineering, Tricentis.

Comment on this article below or via X: @IoTNow_
