Text Analysis and LLMs with Python – Blended Learning
Delivery format
Remote
Duration
3 days
Price
2850 €
Blended Learning – the best of both ways to learn.
This course blends the flexibility of self-paced learning with the structure of live, instructor-led sessions. You'll learn from world-class industry experts and gain practical skills to drive meaningful results in your workplace. Our digital platform also empowers you to track your progress and manage your learning journey effectively.
This 3-day course provides an in-depth exploration of Large Language Models (LLMs) and their applications, covering the key Generative AI skills expected of AI Engineers.
It is designed for those with existing knowledge of Python and Python data packages, and ideally some machine learning experience.
This instructor-led course covers essential topics such as LLM architectures, prompt engineering, model monitoring and observability, and working with both vendor-managed and open-source models.
Additionally, the course introduces the use of agentic workflows and prompt evaluation techniques to help organisations effectively manage AI-driven processes.
These later topics are an excellent taster of our Building AI Agents with Python course for those progressing in their AI Engineering journey.
By the end of this course, learners will be able to:
- Identify and apply knowledge of Large Language Models to build Generative AI applications
- Interact with LLMs with advanced prompt engineering techniques
- Select and build with the right models and architectures for an AI application, including when RAG and fine-tuning are appropriate
- Evaluate, monitor, and optimize speed and performance in an AI system in deployment
Participants should have:
- Experience with base Python, including collections, and experience with common packages for data handling such as NumPy or pandas.
- If participants do not yet have this prior knowledge, it can be gained through QADHPYTHON Data Handling with Python.
- Desirable: experience with machine learning development processes.
- If participants do not yet have this prior knowledge, it can be gained through either QAIDSDP Introduction to Data Science for Data Professionals or QADSMLP Data Science and Machine Learning with Python.
Target audience
This course is designed for:
- Data Scientists
- Software Developers
- Machine Learning Engineers
- AI Engineers
- DevOps Engineers
01 Overview of Large Language Models
- Discuss the evolution and architecture of LLMs
- Differentiate between LLM architectures and their applications
- Apply basic prompt engineering techniques to interact with LLMs
02 Transformer Model Architecture
- Describe core transformer components including attention mechanisms
- Discuss model structure and recent architecture improvements
- Identify techniques to speed up generation, such as caching keys and values
- Investigate encoder-only, decoder-only, and encoder-decoder models, and the types of tasks and applications each is used for
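The key-value caching idea mentioned above can be sketched in a few lines of NumPy. This is an illustrative toy, not production code: a single attention head with random weights and no batching. The point it demonstrates is that caching keys and values lets each decoding step project only the newest token, while producing exactly the same output as recomputing the whole prefix.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                           # model / head dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Scaled dot-product attention for one query against cached keys/values."""
    scores = q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ V

# Autoregressive decoding with a KV cache: each step projects only the
# newest token and appends its key/value to the cache.
tokens = rng.standard_normal((5, d))            # stand-ins for token embeddings
K_cache, V_cache = np.empty((0, d)), np.empty((0, d))
cached_out = []
for x in tokens:
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    cached_out.append(attend(x @ Wq, K_cache, V_cache))

# Reference: recompute K and V for the whole prefix at the final step.
K_full, V_full = tokens @ Wk, tokens @ Wv
full_out = attend(tokens[-1] @ Wq, K_full, V_full)

print(np.allclose(cached_out[-1], full_out))    # caching changes cost, not output
```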
03 Tokens and Embeddings
- Discuss what tokens and embeddings are
- Describe different tokenization and embedding approaches
- Discuss the wide range of applications that embeddings enable
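A toy sketch of the path from raw text to tokens to embeddings. It assumes a whitespace tokenizer and a random embedding table; real systems use learned subword tokenizers (such as BPE) and embedding matrices trained so that related tokens end up close together.

```python
import numpy as np

corpus = ["the cat sat", "the dog sat", "stocks fell sharply"]

# Toy whitespace tokenizer: map each distinct word to an integer id.
vocab = {tok: i for i, tok in enumerate(sorted({w for s in corpus for w in s.split()}))}

def tokenize(text):
    return [vocab[w] for w in text.split()]

# Toy embedding table: one random vector per token id.
rng = np.random.default_rng(42)
emb = rng.standard_normal((len(vocab), 16))

def sentence_embedding(text):
    """Mean-pool token embeddings into a single sentence vector."""
    return emb[tokenize(text)].mean(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v = sentence_embedding("the cat sat")
print(tokenize("the cat sat"))                  # token ids from the toy vocabulary
print(cosine(v, sentence_embedding("the dog sat")))
```

The same pattern (tokenize, look up vectors, pool, compare by cosine similarity) underlies search, clustering, and retrieval applications of embeddings.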
04 Using Pre-Trained Language Models
- Describe what a pre-trained language model is
- Describe different types of pre-trained models
- Consider the practical details of working with specific pre-trained models
- Apply a pre-trained model to a text classification task
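One common way to apply a pre-trained model to classification is to embed both the text and the candidate labels and pick the nearest label in embedding space. The sketch below shows only the shape of that workflow: the `embed` function here is a deliberately crude stand-in (a random projection of character counts), and in practice you would call an actual pre-trained encoder instead.

```python
import numpy as np

# Stand-in for a pre-trained embedding model: a fixed random projection of
# character counts. Swap this for a real encoder in any actual application.
rng = np.random.default_rng(7)
proj = rng.standard_normal((256, 32))

def embed(text):
    counts = np.zeros(256)
    for ch in text.lower():
        counts[ord(ch) % 256] += 1
    v = counts @ proj
    return v / np.linalg.norm(v)

def classify(text, labels):
    """Zero-shot style classification: choose the nearest label in embedding space."""
    v = embed(text)
    scores = {lab: float(v @ embed(lab)) for lab in labels}
    return max(scores, key=scores.get)

labels = ["positive", "negative"]
print(classify("positive", labels))
```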
05 Prompt Engineering
- Discuss why prompt structure and content are important
- Develop effective prompts to improve LLM responses
- Write prompts with different components and discover how they affect LLM responses
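Separating a prompt into components makes it easy to vary each one independently and observe how the LLM's responses change. A minimal sketch, assuming a simple "Input/Output" few-shot format (the exact template conventions vary by model):

```python
def build_prompt(system, examples, query):
    """Assemble a prompt from separate components: a system instruction,
    few-shot examples, and the user's query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{system}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_prompt(
    system="You are a sentiment classifier. Answer with one word.",
    examples=[("I loved it", "positive"), ("Terrible service", "negative")],
    query="The food was great",
)
print(prompt)
```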
06 Advanced Text Generation Techniques and Tools
- Describe and implement agentic workflows for multi-step reasoning and task automation
- Discuss memory management and conversation handling in LLM applications
- Implement the architecture and components of RAG pipelines, including vector stores and retrievers
- Compare the pros and cons of agentic techniques in real-world scenarios
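The core of a RAG pipeline is a vector store plus a retriever: documents are embedded once, a query is embedded at request time, and the most similar documents are stuffed into the prompt as context. A self-contained toy version, using bag-of-words vectors in place of a real embedding model and an in-memory matrix in place of a vector database:

```python
import numpy as np

def embed(text, vocab):
    """Toy embedding: normalized bag-of-words counts over a shared vocabulary."""
    v = np.array([text.lower().split().count(w) for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

class VectorStore:
    def __init__(self, docs):
        self.docs = docs
        self.vocab = sorted({w for d in docs for w in d.lower().split()})
        self.matrix = np.stack([embed(d, self.vocab) for d in docs])

    def retrieve(self, query, k=2):
        """Return the k documents most similar to the query (cosine similarity)."""
        scores = self.matrix @ embed(query, self.vocab)
        top = np.argsort(scores)[::-1][:k]
        return [self.docs[i] for i in top]

docs = [
    "The refund policy allows returns within 30 days",
    "Shipping takes 3 to 5 business days",
    "Support is available by email and chat",
]
store = VectorStore(docs)
context = store.retrieve("how long does shipping take", k=1)

# The retrieved context is then placed into the generation prompt.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context)
```

A production pipeline replaces the toy `embed` with a pre-trained embedding model and the in-memory matrix with a dedicated vector store, but the retrieve-then-generate structure is the same.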
07 Training and Finetuning Language Models
- Describe how embedding models are trained and discuss techniques like contrastive learning
- Apply techniques to continue pre-training language models, such as masked language modelling
- Apply techniques for fine-tuning classification models, such as supervised fine-tuning and preference tuning
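The data-preparation step behind masked language modelling can be sketched directly: hide a random subset of tokens and keep the originals as training targets. This is a simplification of what models like BERT actually do (which also sometimes keeps or randomizes the selected tokens), but it shows the core idea.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """Prepare a masked-language-modelling example: hide some tokens and
    keep the originals as the training targets."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)      # the model is trained to predict this
        else:
            inputs.append(tok)
            labels.append(None)     # no loss on unmasked positions
    return inputs, labels

tokens = "the cat sat on the mat".split()
inputs, labels = mask_tokens(tokens, mask_prob=0.3)
print(inputs)
```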
08 Evaluating, Deploying, and Observing Models
- Identify metrics for evaluating LLMs and the difficulties involved in doing so
- Outline best practices and typical operations when deploying models
- Discuss the importance of model monitoring and detecting drift – what to do when problems are identified
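One widely used drift signal is the Population Stability Index (PSI), which compares the distribution of a monitored quantity (input features, output scores, embedding norms) in production against a reference window. A minimal NumPy sketch; the thresholds in the docstring are a common rule of thumb, not a universal standard.

```python
import numpy as np

def psi(reference, production, bins=10):
    """Population Stability Index between two distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range values
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    prod_frac = np.histogram(production, edges)[0] / len(production)
    ref_frac = np.clip(ref_frac, 1e-6, None)         # avoid log(0)
    prod_frac = np.clip(prod_frac, 1e-6, None)
    return float(np.sum((prod_frac - ref_frac) * np.log(prod_frac / ref_frac)))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)               # scores at deployment time
same = rng.normal(0.0, 1.0, 5000)                    # production: no drift
shifted = rng.normal(0.8, 1.0, 5000)                 # production: mean has drifted

print(round(psi(reference, same), 3), round(psi(reference, shifted), 3))
```

When the index crosses a threshold, typical responses include investigating the upstream data, refreshing the reference window, or retraining the model.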
09 Techniques for Latency Reduction and Model Optimization
- Describe the trade-offs between model performance, size, and inference speed
- Identify the role of techniques such as quantization
- Apply quantization to optimize model performance
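Symmetric int8 quantization illustrates the trade-off directly: weights shrink 4x (from float32 to int8) at the cost of a bounded rounding error. A minimal per-tensor sketch; production systems usually quantize per channel or per group and handle activations separately.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(3)
w = rng.standard_normal((256, 256)).astype(np.float32)   # a stand-in weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)                               # 0.25: 4x smaller
print(float(np.abs(w - w_hat).max()) <= scale / 2)       # error within half a step
```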
Exams and assessments
Learning outcomes are assessed through activities within this Instructor-Led course.
Delivery Method
This Blended Learning course consists of two key stages.
Self-Paced Learning
- Up to 1 hour, completed over a 4-week period prior to the live event.
- It is recommended that the self-paced learning is completed prior to joining the live event.
- It is recommended that learners have a minimum of 4 weeks between the course booking and the instructor-led live event to complete the necessary hours of learning.
- The self-paced learning is available 4 weeks prior to the live event and for 12 months following the live event.
Instructor-Led Live Event
- This course has a 3-day live event.
Price 2850 € + VAT
We reserve the right to make changes to the programme, trainers, and delivery format.
See the frequently asked questions here.
