Certified Tester – Testing with Generative AI (CT-GenAI)
Delivery format: Remote
Duration: 2 days
Price: 2849 €
This two-day course provides software testing professionals with the knowledge and practical skills to leverage Generative AI (GenAI) and Large Language Models (LLMs) in software testing. Participants will learn GenAI fundamentals, prompt engineering, risk management, infrastructure, and organizational integration, with hands-on exercises throughout.
By the end of this course, learners will be able to:
- Understand the fundamental concepts, capabilities, and limitations of GenAI and LLMs in software testing.
- Develop and refine prompts for effective use of GenAI in test analysis, design, automation, and reporting.
- Identify and mitigate risks (hallucinations, reasoning errors, biases, privacy, security, environmental impact) associated with GenAI in testing.
- Explain and experiment with LLM-powered test infrastructure, including Retrieval-Augmented Generation and LLMOps.
- Contribute to the adoption and integration of GenAI in test organizations, including change management and skills development.
Before taking the CT-GenAI exam, you must hold the ISTQB® Certified Tester Foundation Level (CTFL) certification.
Target Audience
This course is designed for:
Anyone involved in using generative AI (GenAI) for software testing, including testers, test analysts, test automation engineers, test managers, user acceptance testers, and software developers. The Testing with GenAI qualification is also appropriate for anyone who wants a basic understanding of using GenAI for software testing, such as project managers, quality managers, software development managers, business analysts, IT directors, and management consultants.
Introduction to Generative AI for Software Testing
- AI spectrum: Symbolic AI, ML, Deep Learning, GenAI
- Basics of GenAI and LLMs (tokenization, embeddings, context windows)
- Types of LLMs: Foundation, Instruction-tuned, Reasoning
- Multimodal LLMs and vision-language models
- Hands-on: Tokenization and prompt execution with LLMs (a token-counting sketch follows this list)
- Key LLM capabilities for test tasks
- AI chatbots vs. LLM-powered testing applications
- Interaction models and practical examples
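To give a flavor of the tokenization hands-on, here is a minimal token-counting sketch. It assumes the open-source tiktoken library and the cl100k_base encoding; the course itself may use different tooling.

```python
# Minimal token-counting sketch using the open-source tiktoken library.
# The "cl100k_base" encoding is an assumption; match it to your model.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

test_step = "Given a logged-in user, when the cart is empty, then checkout is disabled."
tokens = encoding.encode(test_step)

print(f"{len(tokens)} tokens")
print(tokens[:8])                                  # first few token IDs
print([encoding.decode([t]) for t in tokens[:8]])  # ...and their text pieces
```

Counting tokens this way makes it tangible how quickly test artifacts consume a model's context window.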
Prompt Engineering for Effective Software Testing
- Structure of prompts: role, context, instruction, input data, constraints, output format (see the sketch after this list)
- Core prompting techniques: prompt chaining, few-shot, meta prompting
- System vs. user prompts
- Hands-on: Analyze and create structured prompts; identify prompting techniques
- Test analysis, design, implementation, regression, monitoring, and control with GenAI
- Choosing appropriate prompting techniques for different test tasks
- Hands-on: Multimodal prompting, prompt chaining, few-shot prompting, prioritizing test cases
- Metrics for evaluating GenAI results: accuracy, precision, recall, relevance, diversity, execution success, time efficiency (a worked example also follows this list)
- Techniques for iterative prompt refinement: A/B testing, output analysis, user feedback
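To make the six prompt components concrete, the sketch below assembles them into a chat-style system/user prompt. All wording and the message format are illustrative assumptions, not prescribed course material.

```python
# Sketch: assembling the six prompt components into a chat-style prompt.
# All wording is illustrative; the dict format mirrors common chat-API conventions.
ROLE          = "You are an experienced software test analyst."
CONTEXT       = "We are testing the checkout flow of a web shop."
INSTRUCTION   = "Derive boundary-value test conditions for the quantity field."
INPUT_DATA    = "Valid quantity: an integer from 1 to 99."
CONSTRAINTS   = "List at most 6 conditions. Do not invent requirements."
OUTPUT_FORMAT = "Return a numbered list, one condition per line."

messages = [
    {"role": "system", "content": ROLE},            # system prompt
    {"role": "user", "content": "\n".join(          # user prompt
        [CONTEXT, INSTRUCTION, INPUT_DATA, CONSTRAINTS, OUTPUT_FORMAT])},
]

for message in messages:
    print(f"[{message['role']}]\n{message['content']}\n")
```

Prompt chaining would then feed the model's answer back as input data for a follow-up instruction (for example, "turn each condition into a test case").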
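Several of the evaluation metrics reduce to set arithmetic once generated artifacts are matched against a human-written reference set; the worked example below uses invented condition IDs.

```python
# Sketch: precision and recall of GenAI-generated test conditions against
# a reference set. The condition IDs below are made-up example data.
reference = {"qty=0", "qty=1", "qty=99", "qty=100", "qty=-1"}    # expert-written
generated = {"qty=1", "qty=99", "qty=100", "qty=50", "qty=abc"}  # LLM output

true_positives = generated & reference
precision = len(true_positives) / len(generated)  # how much of the output is useful
recall    = len(true_positives) / len(reference)  # how much of the need is covered

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
# precision = 0.60, recall = 0.60
```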
Managing Risks of Generative AI in Software Testing
- Hallucinations, reasoning errors, biases: identification and mitigation
- Data privacy and security risks: vulnerabilities, attack vectors, mitigation strategies
- Environmental impact: energy consumption, CO₂ emissions (estimated in the sketch after this list)
- AI regulations, standards, and best practices (ISO/IEC 42001, EU AI Act, NIST AI RMF)
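The environmental-impact estimate is back-of-the-envelope arithmetic once per-token figures are assumed; every constant in the sketch below is an illustrative placeholder, not a measured value for any particular model or data center.

```python
# Back-of-the-envelope energy and CO2 estimate for an LLM-assisted test task.
# Every constant below is an illustrative assumption, not a measured value.
TOKENS_PER_RUN     = 4_000  # prompt + completion tokens for one test task
RUNS_PER_DAY       = 200    # e.g., nightly regression-analysis prompts
WH_PER_1K_TOKENS   = 0.3    # assumed inference energy (watt-hours per 1,000 tokens)
GRID_G_CO2_PER_KWH = 400    # assumed grid carbon intensity (g CO2 per kWh)

tokens_per_day = TOKENS_PER_RUN * RUNS_PER_DAY
kwh_per_day = tokens_per_day / 1_000 * WH_PER_1K_TOKENS / 1_000
co2_g_per_day = kwh_per_day * GRID_G_CO2_PER_KWH

print(f"{kwh_per_day:.2f} kWh/day, about {co2_g_per_day:.0f} g CO2/day")
```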
LLM-Powered Test Infrastructure for Software Testing
- Architectural components: front-end, back-end, LLM integration
- Retrieval-Augmented Generation (RAG) (sketched after this list)
- LLM-powered agents and automation
- Fine-tuning LLMs and SLMs for test tasks
- LLMOps: deployment and management
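To illustrate what Retrieval-Augmented Generation adds, the sketch below retrieves the most relevant requirement for a test question and splices it into the prompt. It deliberately uses a toy bag-of-words similarity so it runs anywhere; production RAG systems use learned embeddings and a vector store.

```python
# Toy Retrieval-Augmented Generation sketch: retrieve the most relevant
# requirement snippet and splice it into the prompt. Real RAG pipelines use
# learned embeddings and a vector database; bag-of-words keeps this runnable.
from collections import Counter
import math

docs = [
    "REQ-12: Quantity must be an integer between 1 and 99.",
    "REQ-27: Checkout is disabled while the cart is empty.",
    "REQ-31: Prices are displayed including VAT.",
]

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

question = "Which test conditions cover the quantity field limits?"
best = max(docs, key=lambda d: cosine(embed(question), embed(d)))

prompt = (f"Use only this requirement as context:\n{best}\n\n"
          f"Task: {question}")
print(prompt)  # the assembled prompt would then be sent to the LLM
```

Grounding the prompt in retrieved requirements is the main lever RAG offers against hallucinated test conditions.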
Deploying and Integrating Generative AI in Test Organizations
- Roadmap for GenAI adoption: risks of shadow AI, strategy, LLM/SLM selection, cost estimation (a cost sketch follows this list), adoption phases
- Change management: essential skills, building GenAI capabilities, evolving test processes and roles
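Recurring-cost estimation (exercised in HO-5.1.3 below) is likewise plain arithmetic over assumed token prices and usage volumes; the figures in this sketch are placeholders, not vendor quotes.

```python
# Sketch: recurring monthly cost of a GenAI test assistant.
# All prices and volumes are placeholder assumptions, not vendor quotes.
PRICE_PER_1M_INPUT_TOKENS  = 0.50   # USD, assumed
PRICE_PER_1M_OUTPUT_TOKENS = 1.50   # USD, assumed
TESTERS            = 10
PROMPTS_PER_DAY    = 40             # per tester, assumed
TOKENS_IN          = 1_500          # per prompt, assumed
TOKENS_OUT         = 500            # per prompt, assumed
WORKDAYS_PER_MONTH = 21

monthly_in  = TESTERS * PROMPTS_PER_DAY * TOKENS_IN  * WORKDAYS_PER_MONTH
monthly_out = TESTERS * PROMPTS_PER_DAY * TOKENS_OUT * WORKDAYS_PER_MONTH
cost = (monthly_in  / 1e6 * PRICE_PER_1M_INPUT_TOKENS +
        monthly_out / 1e6 * PRICE_PER_1M_OUTPUT_TOKENS)

print(f"{monthly_in + monthly_out:,} tokens/month, about ${cost:.2f}/month")
```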
Exams and assessments
Your course fee includes an iSQI voucher for the examination, which you can book at a later date.
- The format of the exam is multiple choice.
- Exam duration is 60 minutes. If the candidate’s native language is not the examination language, the candidate is allowed an additional 25% of time (exam duration = 75 minutes).
- There are 40 questions.
- To pass the exam, at least 65% of the total points must be scored.
Hands-on learning
This course includes the following hands-on exercises:
- HO-1.1.2 Practice tokenization and token count evaluation when using an LLM for a software test task
- HO-1.1.4 Write and execute a prompt for a multimodal LLM using both textual and image inputs for a software test task
- HO-2.1.1 Observe several given prompts for software test tasks, identifying the components of role, context, instruction, input data, constraints and output format in each
- HO-2.1.2a Observe demonstrations of prompt chaining, few-shot prompting, and meta prompting applied to software test tasks
- HO-2.1.2b Identify which prompt engineering techniques are being used in given examples
- HO-2.2.1a Practice multimodal prompting to generate acceptance criteria for a user story based on a GUI wireframe
- HO-2.2.1b Practice prompt chaining and human verification to progressively analyze a given user story and refine acceptance criteria
- HO-2.2.2a Practice functional test case generation from user stories with AI using prompt chaining, structured prompts and meta-prompting
- HO-2.2.2b Use the few-shot prompting technique to generate Gherkin-style test conditions and test cases from user stories (a few-shot sketch follows at the end of this list)
- HO-2.2.2c Use prompt chaining to prioritize test cases within a given test suite, taking into account their specific priorities and dependencies
- HO-2.2.3a Practice few-shot prompting to create and manage keyword-driven test scripts
- HO-2.2.3b Practice structured prompt engineering for test report analysis
- HO-2.2.4 Observe test monitoring metrics prepared by AI from test data
- HO-2.2.5 Select context-appropriate prompting techniques for given test tasks
- HO-2.3.1 Observe how metrics can be used for evaluating the result of generative AI on a test task
- HO-2.3.2 Evaluate and optimize a prompt for a given test task
- HO-3.1.2a Experiment with hallucinations in testing with GenAI
- HO-3.1.2b Experiment with reasoning errors in testing with GenAI
- HO-3.2.3 Recognize data privacy and security risks in a given Generative AI for testing case study
- HO-3.3.1 Use a simulator to calculate the energy and CO₂ emissions for given test tasks with Generative AI
- HO-4.1.2 Experiment with Retrieval-Augmented Generation for a given test task
- HO-4.1.3 Observe how an LLM-powered agent assists in automating a repetitive test task
- HO-4.2.1 Observe an example of a fine-tuning process for a given test task and language model
- HO-5.1.3 Estimate the recurring costs of using Generative AI for a given test task
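As a flavor of HO-2.2.2b, this last sketch builds a few-shot prompt whose examples teach the model the expected Gherkin shape; the stories and scenarios are invented for illustration.

```python
# Sketch: few-shot prompt for Gherkin-style test cases (cf. HO-2.2.2b).
# The example stories and scenarios are invented for illustration.
FEW_SHOT_EXAMPLES = """\
Story: As a user, I can reset my password via email.
Scenario: Reset link is sent
  Given a registered email address
  When the user requests a password reset
  Then a reset link is emailed within 5 minutes

Story: As a user, I can remove an item from my cart.
Scenario: Item removed
  Given a cart containing one item
  When the user removes the item
  Then the cart is empty
"""

new_story = "As a user, I can pay for my cart by credit card."
prompt = ("Write a Gherkin scenario for the last story, "
          "following the style of the examples.\n\n"
          f"{FEW_SHOT_EXAMPLES}\nStory: {new_story}\nScenario:")
print(prompt)  # send to your LLM of choice and review the output before use
```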
Price: 2849 € + VAT
We reserve the right to make changes to the program, trainers, and delivery format.
