Training
Overview
Recent advances in large language models have created unprecedented opportunities for organisations to streamline operations, reduce costs, and improve productivity at scale. We believe organisations of the future will combine human and machine intelligence to learn, master, and apply AI capabilities quickly and effectively.
This course provides a comprehensive, practical introduction to LLM application development using the open-source ecosystem. Learners explore pretrained models from the Hugging Face repository, work directly with the Transformers API, and build task-specific and generative solutions. The course progresses from Transformer fundamentals to multimodal architectures and agentic orchestration using LangChain, equipping participants to design safe, scalable, and enterprise-ready LLM-powered applications.
Prerequisites
Participants should have:
- Experience with Python programming and working with external libraries
- A foundational understanding of machine learning and neural networks
- Familiarity with basic natural language processing concepts
- Awareness of APIs and model inference workflows
Target audience
This course is designed for:
- Developers building LLM-powered enterprise applications
- Data scientists expanding into generative AI and multimodal systems
- Machine learning engineers orchestrating LLM workflows
- Technical professionals seeking to integrate AI capabilities into products and services
Objectives
By the end of this course, learners will be able to:
- Navigate, evaluate, and experiment with models from the Hugging Face model repository
- Use the Transformers API to load, configure, and deploy pretrained LLMs
- Apply encoder-based models for tasks such as semantic analysis, embeddings, question answering, and zero-shot classification
- Work with decoder-style and encoder-decoder architectures for text generation and sequence-to-sequence tasks
- Integrate multimodal models to combine text, image, and audio inputs within unified workflows
- Design and guide generative AI solutions that are safe, effective, and scalable
- Use LangChain to orchestrate LLM pipelines, tools, and agentic workflows
- Incorporate inference and deployment strategies to support enterprise-scale applications
Outline
Course introduction
- Overview of course objectives, structure, and expected outcomes
- Introduction to the Hugging Face ecosystem and Transformers library
- Discussion of enterprise use cases for LLM-powered applications
- How LLMs enhance customer experience, automate workflows, and generate insights
Transformers and large language models
- Motivation for Transformer architectures from deep learning first principles
- Core components of Transformer-style architectures
- Tokenisation and text preprocessing
- Embeddings and vector representations
- Self-attention mechanisms and contextual learning
- Understanding input-output processing in LLMs
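The self-attention mechanism listed above can be made concrete with a few lines of NumPy. This is a minimal single-head sketch with toy dimensions chosen purely for illustration, not a production implementation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a toy sequence.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv are learned projections.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise attention logits
    # Row-wise softmax turns logits into attention weights per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output vector mixes context from all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one contextualised vector per input token
```

The key point the course develops from here is that every output row depends on every input row, which is what lets Transformers learn context without recurrence.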
Task-specific pipelines with encoder models
- Profiling encoder-based models and their strengths
- Semantic analysis and embedding generation
- Question answering pipelines
- Zero-shot and few-shot classification
- Lightweight models for efficient inference
- Evaluating model performance and selecting appropriate architectures
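Encoder-based pipelines of the kind covered in this module can be exercised in a few lines with the Transformers `pipeline` API. The checkpoint name below is one example of many on the Hugging Face Hub, chosen because it is small; the call downloads the model on first use:

```python
from transformers import pipeline

# Sentiment analysis with a compact fine-tuned encoder model.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
result = classifier("The onboarding process was quick and painless.")[0]
print(result["label"], round(result["score"], 3))
```

Swapping the task string (for example to `"zero-shot-classification"`) and the checkpoint is all it takes to move between the encoder tasks listed above, which is the pattern the exercises build on.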
Sequence-to-sequence and decoder-based models
- Introduction to decoder-style, GPT-like architectures
- Autoregressive text generation
- Prompt-based task conditioning
- Encoder-decoder models for machine translation and summarisation
- Few-shot task completion and controlled generation
- Managing output quality, format, and reliability
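Autoregressive generation with a decoder-style model follows the same `pipeline` pattern. The sketch below uses `distilgpt2` only because it is tiny; any causal language model checkpoint works, and the seed is fixed so runs are repeatable:

```python
from transformers import pipeline, set_seed

set_seed(42)  # make sampling reproducible across runs
generator = pipeline("text-generation", model="distilgpt2")
outputs = generator(
    "Large language models can help enterprises",
    max_new_tokens=30,       # cap generation length
    num_return_sequences=1,
)
# The generated text includes the prompt followed by the continuation
print(outputs[0]["generated_text"])
```

Parameters such as `max_new_tokens`, sampling temperature, and the prompt itself are the levers this module uses to manage output quality, format, and reliability.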
Multimodal architectures
- Integrating text, image, and audio data within LLM workflows
- Cross-modal learning concepts
- Using models such as CLIP for linking text and images
- Visual language models for image question answering
- Diffusion-style models for text-guided image generation
- Designing multimodal applications for enterprise scenarios
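Linking text and images with CLIP, as covered in this module, reduces to scoring an image against candidate captions. A synthetic solid-colour image stands in for real data so the sketch is self-contained; the checkpoint download is a few hundred megabytes on first use:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color=(200, 30, 30))  # plain red square
texts = ["a red image", "a photo of a dog", "a spreadsheet"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=1).detach().numpy()[0]
print(dict(zip(texts, probs.round(3))))
```

The same embedding-similarity idea underpins the image question answering and text-guided generation topics later in the module.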
Scaling text generation and inference
- Understanding inference challenges in large language models
- Latency, throughput, and cost considerations
- Optimised model serving and server deployment strategies
- Scaling LLM applications to larger data repositories and user bases
- Monitoring and maintaining production LLM systems
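One common serving optimisation behind the latency/throughput trade-off above is dynamic batching: concurrent requests wait briefly so the model pays its fixed per-forward-pass cost once per batch instead of once per request. The sketch below illustrates the pattern with a stub in place of the model; all names are illustrative and no serving framework is assumed:

```python
import queue
import threading
import time

def fake_llm_batch(prompts):
    """Stand-in for a batched model forward pass."""
    time.sleep(0.05)  # fixed cost paid once per batch, not per request
    return [p.upper() for p in prompts]

class DynamicBatcher:
    """Collect requests for up to max_wait seconds, then run one batch."""

    def __init__(self, max_batch=8, max_wait=0.02):
        self.requests = queue.Queue()
        self.max_batch, self.max_wait = max_batch, max_wait
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, prompt):
        done = threading.Event()
        slot = {"prompt": prompt, "done": done}
        self.requests.put(slot)
        done.wait()  # block the caller until its batch has run
        return slot["result"]

    def _loop(self):
        while True:
            batch = [self.requests.get()]  # block until the first request
            deadline = time.monotonic() + self.max_wait
            while len(batch) < self.max_batch:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            results = fake_llm_batch([s["prompt"] for s in batch])
            for slot, result in zip(batch, results):
                slot["result"] = result
                slot["done"].set()

batcher = DynamicBatcher()
answers = []
threads = [threading.Thread(target=lambda p=p: answers.append(batcher.submit(p)))
           for p in ["hello", "world"]]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(answers))  # → ['HELLO', 'WORLD']
```

Production servers such as dedicated inference engines implement the same idea with far more sophistication (paged memory, continuous batching), which is the direction this module points towards.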
Orchestration and agentic workflows
- Introduction to LangChain for LLM orchestration
- Building modular, reusable LLM pipelines
- Tool integration and environment-enabled agents
- Agentic patterns for decision-making and task decomposition
- Integrating natural language interfaces with standard applications and data sources
- Governance, safety, and responsible AI considerations
Final assessment
- Design and build an LLM-based application integrating text generation, multimodal capabilities, and orchestration
- Apply encoder and decoder models appropriately within a single workflow
- Demonstrate safe and scalable application design principles
- Present and review solutions with instructor feedback
Exams and assessments
Learners complete practical exercises throughout the course to reinforce key concepts. The final assessment requires participants to build a functional LLM-powered application that integrates generation, multimodal learning, and orchestration techniques.
Assessment emphasises applied capability, architectural understanding, and responsible design of enterprise-ready AI systems.
Hands-on learning
This course is designed around applied experimentation and development:
- Direct interaction with pretrained models via the Hugging Face repository
- Implementation of encoder, decoder, and multimodal pipelines
- Guided exercises using the Transformers API
- Orchestration of agentic workflows with LangChain
- Iterative refinement of generative applications for safety and performance
QA’s online courses from Tieturi