Jung Kwon Hwan's Projects
Deep learning framework to train, deploy, and ship AI products Lightning fast.
colossal + lightning = llm trainer
List of Python API Wrappers and Libraries
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
Inference code for LLaMA models
Examples and recipes for Llama 2 model
The official Meta Llama 3 GitHub site
A quick guide (especially) for trending instruction finetuning datasets
A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers)
ACL'2021: LM-BFF: Better Few-shot Fine-tuning of Language Models
Mastering-Python-Design-Patterns-Second-Edition, published by Packt
Medical question and answer dataset gathered from the web.
An evolving list of electronic media data sets used to model mental-health status.
notebooks for machine learning
Mobile Emotion Classification
natural language processing notebooks
Paper reading notes from Kakao Brain's NLP team.
hello server
Open Korean NLP Dataset Curation for Users Around the Globe
Examples and guides for using the OpenAI API
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
An optimized prompt tuning strategy comparable to fine-tuning across model scales and tasks.
Parallel dataset of Korean Questions and Commands
A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"
PORORO: Platform Of neuRal mOdels for natuRal language prOcessing
profile markdown repository
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Are you an early 🐤 or a night 🦉? Let's find out in a gist.
Prompt-BERT: Prompt makes BERT Better at Sentence Embeddings