Dear authors,
I am writing to express my appreciation for your comprehensive and inspiring survey paper on knowledge distillation of LLMs!
I would like to bring to your attention our recent paper, "KnowTuning: Knowledge-aware Fine-tuning for Large Language Models".
In this work, we introduce KnowTuning, a method designed to explicitly and implicitly enhance the knowledge awareness of Large Language Models (LLMs). Using GPT-4 as the teacher model, we devise an explicit knowledge-aware generation stage that trains LLMs to explicitly identify the knowledge triples in their answers. We also propose an implicit knowledge-aware comparison stage that trains LLMs to implicitly distinguish reliable from unreliable knowledge across three aspects: completeness, factuality, and logicality.
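In case a concrete illustration is helpful, here is a minimal Python sketch of how training data for the two stages might be assembled. The function names, data formats, and the DPO-style preference framing below are illustrative assumptions for this comment, not our exact implementation:

```python
# Hypothetical sketch of KnowTuning-style data construction. `Triple`,
# `build_explicit_example`, `build_comparison_pair`, and the prompt/field
# formats are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

def build_explicit_example(question: str, answer: str,
                           triples: list[Triple]) -> dict:
    """Explicit knowledge-aware generation: a supervised example that trains
    the LLM to identify the knowledge triples (extracted by a teacher such
    as GPT-4) supporting its answer."""
    target = "\n".join(f"({t.head}; {t.relation}; {t.tail})" for t in triples)
    return {
        "prompt": (f"Question: {question}\nAnswer: {answer}\n"
                   "List the knowledge triples supporting this answer:"),
        "completion": target,
    }

def build_comparison_pair(question: str, reliable: str,
                          unreliable: str, aspect: str) -> dict:
    """Implicit knowledge-aware comparison: a preference pair in which the
    reliable answer is preferred over one degraded along a given aspect,
    suitable for DPO-style comparison training."""
    assert aspect in {"completeness", "factuality", "logicality"}
    return {"prompt": question, "chosen": reliable,
            "rejected": unreliable, "aspect": aspect}

# Usage: one explicit-stage example and one implicit-stage preference pair.
ex = build_explicit_example(
    "Where is the Eiffel Tower?",
    "The Eiffel Tower is in Paris, France.",
    [Triple("Eiffel Tower", "located in", "Paris")],
)
pair = build_comparison_pair(
    "Where is the Eiffel Tower?",
    "The Eiffel Tower is in Paris, France.",
    "The Eiffel Tower is in Rome.",  # factually degraded answer
    aspect="factuality",
)
```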
I believe our method is relevant to the discussion in your survey paper.
Once again, thank you for your excellent contribution to the field.
Best regards,