Hyppää sisältöön

Koulutus

Building LLM Apps with Prompt Engineering

Access expert-led QA training live online, wherever you learn best.

Ajankohta

10.7.2026

online

QA On-Line Virtual Centre

Ajankohta

10.7.2026

online

QA On-Line Virtual Centre

Overview

Large language models are transforming how organisations build products, automate workflows, and unlock value from data. We believe the future belongs to organisations that can learn, master, and apply AI capabilities at pace and scale. This workshop introduces modern prompt engineering techniques as the fastest path to building practical LLM-powered applications.

Learners will work with NVIDIA NIM, powered by the open-source Llama 3.1 large language model, alongside the LangChain library to structure and orchestrate LLM workflows. Through hands-on exercises, participants will build generative applications, document analysis pipelines, and chatbot assistants, while establishing the foundations required for more advanced techniques such as retrieval-augmented generation and parameter-efficient fine-tuning.

Prerequisites

Participants should have:

  • Familiarity with basic programming fundamentals such as functions, variables, and control flow
  • Experience writing simple scripts in a language such as Python
  • A general understanding of APIs and working with external libraries

Target audience

This course is designed for:

  • Developers and engineers looking to integrate LLMs into products or internal applications
  • Technical professionals exploring AI inference and generative AI use cases
  • Organisations seeking to build applied capability in AI, Cloud, and Data technologies

Objectives

By the end of this workshop, learners will be able to:

  • Explain the core principles of large language models and how prompt engineering influences model behaviour
  • Apply iterative prompt engineering best practices to improve output quality, reliability, and relevance
  • Use NVIDIA NIM to access and deploy LLM capabilities for inference-based applications
  • Design and implement structured LLM workflows using LangChain
  • Build application code for text generation, large-scale document analysis, and chatbot assistants
  • Describe how prompt engineering underpins advanced techniques such as retrieval-augmented generation and parameter-efficient fine-tuning
  • Evaluate LLM outputs and implement strategies to mitigate common risks such as hallucinations and prompt injection

Outline

Introduction to large language models and AI inference

  • Overview of large language models and transformer-based architectures
  • Understanding tokens, context windows, and inference
  • Common enterprise use cases for LLMs
  • The role of AI inference in production systems
  • Positioning LLMs within AI, Cloud, and Data strategies

Foundations of prompt engineering

  • What prompt engineering is and why it matters
  • Instruction design and task specification
  • Zero-shot, one-shot, and few-shot prompting
  • Structuring prompts for clarity, consistency, and control
  • Managing tone, format, and output constraints

Iterative prompt optimisation

  • Evaluating LLM outputs against task requirements
  • Techniques for refining and debugging prompts
  • Chain-of-thought and structured reasoning prompts
  • Using system and user messages effectively
  • Establishing repeatable prompt patterns for applications

Working with NVIDIA NIM and Llama 3.1

  • Overview of NVIDIA NIM architecture and capabilities
  • Accessing and configuring an NVIDIA language model NIM endpoint
  • Interacting with the Llama 3.1 model for inference tasks
  • Performance considerations and scaling inference workloads
  • Integrating NIM into application backends

Building LLM workflows with LangChain

  • Introduction to the LangChain framework
  • Creating prompt templates and reusable components
  • Managing memory and conversational state
  • Composing chains for multi-step reasoning tasks
  • Orchestrating tools and external data sources

Generative applications and document analysis

  • Designing text generation workflows for content and automation
  • Summarisation and information extraction from long documents
  • Chunking strategies and context management
  • Building pipelines for large-scale document processing
  • Validating and post-processing LLM outputs

Chatbot assistants and conversational systems

  • Designing conversational flows and system prompts
  • Managing dialogue context and user intent
  • Handling edge cases and ambiguous queries
  • Integrating chat interfaces with backend services
  • Monitoring and improving chatbot performance

Foundations for advanced LLM techniques

  • Introduction to retrieval-augmented generation
  • When to use retrieval versus fine-tuning
  • Overview of parameter-efficient fine-tuning concepts
  • Security considerations, including prompt injection risks
  • Governance, compliance, and responsible AI practices

Exams and assessments

Participants will complete practical coding exercises, guided labs, and knowledge checks throughout the workshop. A final applied assessment will require learners to design and implement an LLM-based application using NVIDIA NIM and LangChain.

Upon successful completion of the assessment, participants will receive an NVIDIA certificate recognising subject matter competency and supporting professional career growth.

Hands-on learning

This workshop is built around applied, hands-on learning:

  • Guided labs using NVIDIA NIM and Llama 3.1
  • Practical exercises building real LLM-powered features
  • Collaborative problem-solving scenarios based on enterprise use cases
  • Instructor feedback on prompt design and application architecture

Osta liput

QA’s online-courses from Tieturi

Questions about QA courses?

Find out how QA’s live online courses work, what you need to participate, and what to expect before booking your training.