Zecheng Tang's Projects
Summarize all low-cost replication methods for ChatGPT.
59 implementations/tutorials of deep learning papers with side-by-side notes; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), GANs (cyclegan, stylegan2, ...), reinforcement learning (ppo, dqn), capsnet, distillation, ...
A collection of resources and papers on Diffusion Models
A curated list of reinforcement learning with human feedback resources (continually updated)
This is the official code for "Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!" (ACL 2023 Findings)
A trend starting from "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
Easily compute CLIP embeddings and build a CLIP retrieval system with them
My profile
PyTorch implementation of "LayoutTransformer: Layout Generation and Completion with Self-attention" to appear in ICCV 2021
[ICLR'23] DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models
Diffusion-LM
Differentiable Vector Graphics Rasterization
The aim of this repository is to utilize LLaMA to reproduce and enhance the Stanford Alpaca
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
Forward-Looking Active REtrieval-augmented generation (FLARE)
GENIUS: generating text using sketches! A strong and general textual data augmentation tool.
:zap: Dynamically generated stats for your GitHub READMEs
Reading list for instruction tuning. A trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
High-Resolution Image Synthesis with Latent Diffusion Models
LayoutDM: Discrete Diffusion Model for Controllable Layout Generation [Inoue+, CVPR2023]
Layout Generation and Baseline implementations
[CVPR 2022 Oral] Towards Layer-wise Image Vectorization
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
This is the official code for "Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change" (EMNLP 2022 long)
Ongoing research on training transformer language models at scale, including BERT & GPT-2