
ECE7115 Multimodal VLM (LLM)

Inha University · Spring 2026

This course provides an in-depth study of large language models (LLMs), the core foundation for understanding multimodal VLMs. It covers LLMs from the fundamentals to the latest techniques: the Transformer, modern LLM architectures, training/inference pipelines, GPU systems, and RL-based post-training. (Despite the course title, VLM content is not covered!)
Note: This course is based on Stanford CS336.

Instructor: 안남혁 (Department of Electrical and Electronic Engineering, Inha University)

Lecture Schedule & Materials

| Date | Content | Slides | YouTube |
| --- | --- | --- | --- |
| 3/2 | No class (national holiday) | | |
| 3/9 | No class | | |
| 3/16 | Week 1. Introduction + Transformer: course introduction; resource accounting; Transformer | 0. Course Introduction; 1. Resource Accounting; 2. Transformer | 1. Course Introduction + Resource Accounting; 2. Transformer |
| 3/23 | Week 2. LLM Basics: pre-training; post-training; fine-tuning, prompting | 3. LLM Basics | 3-1. LLM Basics (1); 3-2. LLM Basics (2) |
| 3/30 | Week 3. LLM Architecture (1): modern LLM models; attention variants | 4. Modern LLM Architecture | 4-1. Modern LLM Architecture; 4-2. Attention Variants |
| 4/6 | Week 4. LLM Architecture (2): mixture-of-experts; scaling laws | 5. Mixture-of-Experts; 6. Scaling Laws | 5. Mixture-of-Experts; 6. Scaling Laws |
| 4/13 | No class | | |
| 4/20 | Week 5. LLM Case Study: recent model architectures | 7. LLM Case Study | 7. LLM Case Study |
| 4/27 | Week 6. Understanding GPUs: GPUs; FlashAttention | 8. Understanding GPUs | 8. Understanding GPUs & FlashAttention |
| 5/4 | Week 7. Parallelism: multi-GPU/multi-machine training | 9. Parallelism | |
| 5/11 | Week 8. Inference, Evaluation: inference cost & techniques; evaluation metrics | | |
| 5/18 | Week 9. Dataset, SFT: training dataset; supervised fine-tuning | | |
| 5/25 | No class (national holiday) | | |
| 6/1 | Week 10. RLHF: introduction to RL; RL from human feedback | | |
| 6/8 | Week 11. Reasoning: training-free reasoning; training reasoning (RL with verifiable rewards) | | |
| 6/15 | Week 12. Tool & Agent, Case Study: tool use, multi-agent; case study on post-training | | |