
Synthetic Adaptive Guided Embeddings (SAGE): A Novel Knowledge Distillation Method

arXiv: 2508.14783v1

Title#

Synthetic Adaptive Guided Embeddings (SAGE): A Novel Knowledge Distillation Method

Abstract#

Model distillation enables the transfer of knowledge from large-scale models to compact student models, facilitating deployment in resource-constrained environments. However, conventional distillation approaches often suffer from computational overhead and limited generalization. We propose a novel adaptive distillation framework that dynamically augments training data in regions of high student model loss. Using UMAP-based dimensionality reduction and nearest neighbor sampling, our method identifies underperforming regions in the embedding space and generates targeted synthetic examples to guide student learning. To further improve efficiency, we introduce a lightweight teacher-student interface that bypasses the teacher's input layer, enabling direct distillation on vectorized representations. Experiments across standard NLP benchmarks demonstrate that our 66M-parameter student model consistently matches or surpasses established baselines, achieving 91.2% on QNLI and 92.3% on SST-2, while training with fewer epochs. These results highlight the promise of loss-aware data augmentation and vectorized distillation for efficient and effective model compression.
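
The abstract only sketches how the loss-aware augmentation works: project embeddings with UMAP, locate regions where the student's loss is high, and sample nearest neighbors there to generate targeted synthetic examples. The paper's own code is not reproduced on this page, so the following is a minimal illustrative sketch of that step under stated assumptions: the function name `generate_synthetic_embeddings`, the top-quantile definition of a "high-loss region", the interpolation-based synthesis, and all hyperparameter values are hypothetical choices, not the authors' implementation.

```python
# Minimal sketch of loss-aware synthetic augmentation in embedding space.
# Assumptions (not from the paper): "high loss" means the top quantile of
# per-example student loss, and new vectors are made by interpolating
# between a hard example and one of its neighbors.
# Requires numpy, umap-learn, and scikit-learn.

import numpy as np
import umap
from sklearn.neighbors import NearestNeighbors


def generate_synthetic_embeddings(embeddings, per_example_loss,
                                  loss_quantile=0.9, n_neighbors=5,
                                  n_synthetic_per_point=2, seed=0):
    """Return synthetic vectors targeted at regions of high student loss.

    embeddings       : (N, D) vectorized representations of the training set
    per_example_loss : (N,)   student loss on each example
    """
    rng = np.random.default_rng(seed)

    # 1. Reduce dimensionality with UMAP so neighborhoods are cheap to query.
    reducer = umap.UMAP(n_components=2, random_state=seed)
    low_dim = reducer.fit_transform(embeddings)

    # 2. Mark the examples the student handles worst (top loss quantile).
    threshold = np.quantile(per_example_loss, loss_quantile)
    hard_idx = np.where(per_example_loss >= threshold)[0]

    # 3. Find each hard example's neighbors in the projected space.
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(low_dim)
    _, neighbor_idx = nn.kneighbors(low_dim[hard_idx])

    # 4. Synthesize new vectors by interpolating, in the original embedding
    #    space, between each hard example and a randomly chosen neighbor.
    synthetic = []
    for row, i in zip(neighbor_idx, hard_idx):
        for _ in range(n_synthetic_per_point):
            j = rng.choice(row[1:])          # row[0] is the point itself
            alpha = rng.uniform(0.2, 0.8)    # assumed mixing range
            synthetic.append(alpha * embeddings[i] + (1 - alpha) * embeddings[j])
    return np.stack(synthetic)


if __name__ == "__main__":
    # Toy demo with random data standing in for real teacher-space vectors.
    X = np.random.randn(500, 768).astype(np.float32)
    losses = np.random.rand(500)
    print(generate_synthetic_embeddings(X, losses).shape)  # roughly (100, 768)
```

In this reading, the UMAP projection is used only to decide which points count as neighbors, while interpolation happens in the original embedding space, so the resulting vectors could be passed straight through the vectorized teacher-student interface the abstract describes (bypassing the teacher's input layer); whether SAGE synthesizes examples this way is an assumption of the sketch.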

Article Page#

Synthetic Adaptive Guided Embeddings (SAGE): A Novel Knowledge Distillation Method

PDF Access#

View the Chinese PDF - 2508.14783v1
