Accurate Knowledge Distillation via N-best Reranking

We propose using n-best reranking to enhance Sequence-Level Knowledge Distillation (Kim and Rush, 2016). For each source sentence in the student model's training data, we extract pseudo-labels from the teacher's top n-best hypotheses, and we leverage a diverse set of models with different inductive biases, objective functions, or architectures, including some publicly available large language models, to pick the highest-quality hypothesis as the label. The effectiveness of our proposal is validated through experiments on the WMT'21 German ↔ English and Chinese ↔ English translation tasks. Our results demonstrate that training on pseudo-labels generated by our n-best reranker yields a significantly more accurate student model. In fact, our best student model achieves accuracy comparable to a large translation model from Tran et al. (2021) with 4.7 billion parameters, while having two orders of magnitude fewer parameters.
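To make the reranking step concrete, here is a minimal sketch of how a pseudo-label could be selected from a teacher's n-best list by combining scores from several models. This is an illustration under simplifying assumptions, not the authors' exact pipeline: the function `rerank_pseudo_label`, the stand-in scorers, and the weighted-sum combination are all hypothetical; in the paper the scorers would be full translation models, language models, and similar systems.

```python
from typing import Callable, Sequence

# A "scorer" maps (source, hypothesis) -> a quality score (higher is better).
Scorer = Callable[[str, str], float]


def rerank_pseudo_label(
    source: str,
    nbest: Sequence[str],
    scorers: Sequence[Scorer],
    weights: Sequence[float],
) -> str:
    """Pick the highest-scoring hypothesis from a teacher's n-best list.

    Every hypothesis is scored by each model in `scorers`; the scores are
    combined as a weighted sum, and the argmax becomes the pseudo-label
    used to train the student.
    """
    def combined_score(hyp: str) -> float:
        return sum(w * s(source, hyp) for w, s in zip(weights, scorers))

    return max(nbest, key=combined_score)


# Toy usage with stand-in scorers (real rerankers would be MT models, LLMs, etc.).
if __name__ == "__main__":
    length_scorer: Scorer = lambda src, hyp: -abs(len(hyp.split()) - len(src.split()))
    overlap_scorer: Scorer = lambda src, hyp: float(len(set(src.split()) & set(hyp.split())))

    nbest = ["the cat sits on the mat", "cat sit mat", "a cat is sitting on the mat"]
    label = rerank_pseudo_label(
        "die Katze sitzt auf der Matte",
        nbest,
        [length_scorer, overlap_scorer],
        [0.5, 0.5],
    )
    print(label)  # selected pseudo-label for student training
```

In practice, the weights assigned to each scorer would themselves be tuned on a development set so that the combined score correlates with translation quality.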

