
December 10, 2025

Best Practices for Using AI Models in QA

Tags: Testing, QA, AI

After spending the last two years integrating AI into my daily testing workflows, I’ve learned one undeniable truth: AI is not a magic wand; it is a high-speed intern.

If you treat a Large Language Model (LLM) like a senior engineer who knows your entire codebase by heart, you will be disappointed. But if you treat it like a brilliant but junior assistant who needs clear guidance and supervision, it will 10x your productivity.

Through trial, error, and thousands of prompts, I’ve distilled my experience into five core best practices. Here is how to get the most out of AI for Quality Assurance.


1. Write Clear Instructions

Why it's important:

AI models operate on the principle of "Garbage In, Garbage Out." If your prompt is vague, the model will fill in the gaps with assumptions—and those assumptions are often wrong. To get usable test cases or code, you must be explicit about the scope, the format, the tools, and the constraints.

Example:
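
The difference between a vague prompt and a clear one (the product, tooling, and selectors below are placeholders for illustration):

  Vague: "Write some tests for the login page."

  Clear: "Using Playwright with TypeScript and the Page Object Model, write three UI test cases for the login page of our e-commerce site: valid login, invalid password, and empty fields. Use data-testid selectors, keep each test independent, and return only the spec file with no explanations."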


2. Split Complex Tasks into Simpler Subtasks

Why it's important:

LLMs have a limited "context window" and can lose the thread of logic if asked to do too much at once. If you ask an AI to "Build an entire automation framework from scratch," it will likely hallucinate or provide a shallow, broken skeleton. Breaking tasks down ensures high-quality output for every component.

Example:

Instead of asking for the whole framework at once, try this sequence:

  1. "Generate a folder structure for a Pytest framework."
  2. "Now, write the conftest.py file to handle browser setup and teardown."
  3. "Create a BasePage class with common methods like click and enter_text."
  4. "Finally, write the actual test script for the 'Add to Cart' feature inheriting from BasePage."

3. Provide References

Why it's important:

The AI does not know your private documentation, your specific API endpoints, or your user stories unless you provide them. "Grounding" the model by pasting relevant context reduces hallucinations and ensures the output is actually applicable to your project.

Example:
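
Instead of "Write API tests for the login endpoint," paste the actual contract into the prompt first (the endpoint and responses below are made up for illustration):

"Here is our API spec: POST /api/v1/login accepts an email and a password, and returns 200 with a JWT on success, 401 for wrong credentials, and 422 for a malformed body. Based only on this spec, generate Pytest API tests covering all three responses."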


4. Give the Model Time to Think

Why it's important:

This is often called "Chain-of-Thought" prompting. If you ask for the final answer immediately, the model might guess. If you ask it to plan its approach first, it "reasons" through the logic, resulting in significantly higher accuracy.

Example:
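
Instead of demanding the final artifact in one shot, ask for the plan first (an illustrative prompt):

"Before writing any test cases for the checkout flow, first list the user scenarios, edge cases, and negative paths you plan to cover, and explain why. Wait for my confirmation, then write the test cases."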


5. Test Changes Systematically (Git & Verify)

Why it's important:

AI-generated code is prone to subtle bugs, deprecated library usage, or logic errors. If you generate five different scripts and paste them all into your project at once, debugging will be a nightmare. You must treat AI code as untrusted until verified.

The Workflow:

  1. Generate: Ask the AI for a specific function or test.
  2. Verify: Run the code immediately. Does it compile? Does the test pass?
  3. Commit: If it works, commit it to Git.
  4. Repeat: Move to the next task.

Example:

"I am building a utility file. I will ask you to add functions one by one.

By committing only working code, you ensure you always have a 'safe save point' to return to if the AI leads you down a rabbit hole.
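
To make the loop concrete, a single iteration might look like this (a hypothetical date helper, purely to illustrate the verify-before-commit step):

```python
# utils/date_utils.py -- first AI-generated helper, verified before committing
from datetime import datetime


def to_iso_date(value: str, fmt: str = "%d/%m/%Y") -> str:
    """Convert a date string such as '25/12/2025' into ISO format '2025-12-25'."""
    return datetime.strptime(value, fmt).date().isoformat()


# Verify step: run a quick check (or a proper Pytest test) before committing to Git
assert to_iso_date("25/12/2025") == "2025-12-25"
```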


Final Thoughts

AI is a powerful accelerator for QA, but it requires a pilot. By being specific, breaking down work, providing context, allowing for reasoning, and rigorously verifying the output via version control, you turn a chaotic chatbot into your most valuable testing asset.


Frequently Asked Questions

Why does the AI keep giving me generic test cases that don't match my project?

This usually happens because the prompt is too vague. As mentioned in Best Practice #1, AI models function on "Garbage In, Garbage Out." If you don't specify the tool (e.g., Playwright), the language (e.g., TypeScript), and the design pattern (e.g., Page Object Model), the AI will guess. Always be explicit about your constraints.

Can I just copy and paste the code the AI generates directly into my project?

Never. You should treat AI-generated code as "untrusted" until verified. As outlined in Best Practice #5, you must run the code to ensure it compiles and passes the test. Only after verification should you commit it to Git. This prevents broken logic from polluting your codebase.


Published: December 10, 2025
Last Updated: December 13, 2025

About the Author

From websites to web apps, I create digital experiences that solve real problems and delight users. Most importantly, everything I build, I build with PEOPLE!
