We are interested in developing non-English LLMs and transferring knowledge from large language models to smaller ones.
- Development of a Solar-based Self-Introduction Correction LLM using SFT
2024.02 ~ 2024.03
- Development of Gemma 2B Korean Pre-trained Model
2024.04 ~ 2024.07
- Application of Quantization to LLMs using llama.cpp
2024.05 ~ 2024.05
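A minimal sketch of the idea behind weight quantization as done in llama.cpp: floats are mapped to low-bit integers with a scale factor. This toy example uses a single per-tensor int8 scale; llama.cpp's actual K-quant formats use block-wise scales and lower bit widths, so this is an illustration of the concept, not its implementation.

```python
# Symmetric int8 quantization with one per-tensor scale
# (illustrative only; llama.cpp uses block-wise K-quant schemes).

def quantize_int8(weights):
    """Map float weights to int8 values using a single scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.31]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# Rounding error is bounded by half the quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```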
- Development of a Llama 3 8B Model using Chat Vector and ORPO
2024.05 ~ 2024.06
- Development of a Korean LLaVA Model using Chat Vector
2024.06 ~ 2024.07
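The chat-vector projects above rest on simple weight arithmetic: the delta between an instruction-tuned model and its base captures "chat ability," and adding that delta to a base model continually pre-trained on another language transfers it. A minimal sketch, with toy dicts of floats standing in for real tensor state dicts:

```python
# Chat-vector arithmetic on toy weights (real models use tensor
# state dicts with matching architectures and key names).

def chat_vector(chat_weights, base_weights):
    """Weight delta between an instruction-tuned model and its base."""
    return {k: chat_weights[k] - base_weights[k] for k in base_weights}

def apply_chat_vector(target_weights, vector, alpha=1.0):
    """Add the chat vector to another model sharing the architecture."""
    return {k: target_weights[k] + alpha * vector[k] for k in target_weights}

base = {"w1": 0.10, "w2": -0.20}    # original base model
chat = {"w1": 0.15, "w2": -0.35}    # its instruction-tuned version
ko_base = {"w1": 0.40, "w2": 0.10}  # e.g. a Korean continually pre-trained base

cv = chat_vector(chat, base)
ko_chat = apply_chat_vector(ko_base, cv)
```

In practice a further preference-tuning stage such as ORPO can then refine the merged model.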
- Development of Korean Financial LLM Leaderboard
2024.07 ~ 2024.08
- Machine Learning / Deep Learning
- LLM Pretrain / Fine-Tuning
- Quantization / Knowledge Distillation
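For the knowledge distillation listed above, the standard recipe trains a small student to match a large teacher's temperature-softened output distribution. A minimal sketch of the distillation loss with hand-rolled softmax and toy logits (values are illustrative, not from any real model):

```python
import math

# Knowledge distillation loss: KL(teacher || student) on
# temperature-softened distributions, scaled by T^2 as in
# Hinton et al.'s formulation. Logits here are toy values.

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence from student to teacher soft targets."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = distillation_loss([2.0, 0.5, -1.0], [1.0, 0.8, -0.5])
```

The loss is zero when the student exactly matches the teacher and positive otherwise, so minimizing it pulls the student's predictions toward the teacher's.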
- 🏆 2023 AI Content Convergence Creation Lab, AI Convergence Content Contest (AI+ Content Results category) - 1st place [overview]