ECE7115 Multimodal VLM (LLM)

Inha University · Spring 2026

This course provides an in-depth study of Large Language Models (LLMs), the essential foundation for understanding Multimodal Vision-Language Models. Topics include LLM architectures, training/inference pipelines, GPU systems, and post-training methods such as RLHF and RLVR. (But I will not cover VLM-related topics, sorry!)
Note: This course is built upon Stanford CS336.

Instructor: Namhyuk Ahn (School of Electrical and Electronic Engineering, Inha University)

Schedule & Lecture Materials

Note: We provide lecture videos in Korean only.

3/2 No class (National holiday)

3/9 No class

3/16 Week 1. Introduction + Transformer
- Course introduction
- Resource accounting
- Transformer
Slides: 0. Course Introduction · 1. Resource accounting · 2. Transformer
YouTube: 1. Course Introduction + Resource accounting · 2. Transformer

3/23 Week 2. LLM Basics
- Pre-training
- Post-training
- Fine-tuning, Prompting
Slides: 3. LLM Basics
YouTube: 3-1. LLM Basics (1) · 3-2. LLM Basics (2)

3/30 Week 3. LLM Architecture (1)
- Modern LLM models
- Attention variants
Slides: 4. Modern LLM Architecture
YouTube: 4-1. Modern LLM Architecture · 4-2. Attention Variants

4/6 Week 4. LLM Architecture (2)
- Mixture-of-experts
- Scaling laws
Slides: 5. Mixture-of-Experts · 6. Scaling Laws
YouTube: 5. Mixture-of-Experts · 6. Scaling Laws

4/13 No class

4/20 Week 5. LLM Case Study
- Recent model architectures
Slides: 7. LLM Case Study
YouTube: 7. LLM Case Study

4/27 Week 6. Understanding GPUs
- GPUs
- FlashAttention
Slides: 8. Understanding GPUs
YouTube: 8. Understanding GPUs & FlashAttention

5/4 Week 7. Parallelism
- Multi-GPU/machine training
Slides: 9. Parallelism

5/11 Week 8. Inference, Evaluation
- Inference cost & techniques
- Evaluation metrics

5/18 Week 9. Dataset, SFT
- Training dataset
- Supervised fine-tuning

5/25 No class (National holiday)

6/1 Week 10. RLHF
- Introduction to RL
- RL from human feedback

6/8 Week 11. Reasoning
- Training-free reasoning
- Training-based reasoning (RL with verifiable rewards)

6/15 Week 12. Tool & Agent, Case Study
- Tool use, multi-agent
- Case study on post-training