Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement

The growing demand for personalized and private on-device applications highlights the importance of source-free unsupervised domain adaptation (SFDA) methods, especially for time-series data, where individual differences produce large domain shifts. As sensor-embedded mobile devices become ubiquitous, optimizing SFDA methods for parameter utilization and data-sample efficiency in time-series contexts becomes crucial. Personalization in time series is necessary to accommodate the unique patterns and behaviors of individual users, enhancing the relevance and accuracy of the predictions. In this work, we introduce a novel paradigm for source-model preparation and target-side adaptation aimed at improving both parameter and sample efficiency during the target-side adaptation process. Our approach re-parameterizes source-model weights with Tucker-style decomposed factors during the source-model preparation phase. Then, at the time of target-side adaptation, only a subset of these decomposed factors is fine-tuned. This strategy not only enhances parameter efficiency, but also implicitly regularizes the adaptation process by constraining the model’s capacity, which is essential for personalization in diverse and dynamic time-series environments. Moreover, the proposed strategy achieves overall model compression and improves inference efficiency, making it highly suitable for resource-constrained devices. Extensive experiments on various time-series SFDA benchmark datasets demonstrate the effectiveness and efficiency of our approach, underscoring its potential for advancing personalized on-device time-series applications.
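The core idea above, re-parameterizing a weight tensor into Tucker-style factors so that only a small subset of parameters needs fine-tuning on the target side, can be sketched with a truncated higher-order SVD (HOSVD) in NumPy. This is a hypothetical illustration, not the paper's implementation; the tensor shapes, ranks, and choice of HOSVD are assumptions for the sketch.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_dot(t, m, mode):
    """Multiply tensor `t` by matrix `m` along axis `mode`."""
    res = np.tensordot(m, t, axes=(1, mode))
    return np.moveaxis(res, 0, mode)

def tucker_hosvd(w, ranks):
    """Truncated HOSVD: factor matrices from per-mode SVDs, then the core."""
    factors = [np.linalg.svd(unfold(w, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    core = w
    for n, u in enumerate(factors):
        core = mode_dot(core, u.T, n)   # project onto each factor subspace
    return core, factors

# Toy 1-D conv weight: (out_channels, in_channels, kernel_size).
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 5))

core, factors = tucker_hosvd(w, ranks=(8, 4, 3))

# Reconstruct the effective weight from the decomposed parameterization.
w_hat = core
for n, u in enumerate(factors):
    w_hat = mode_dot(w_hat, u, n)

# Parameter budget: decomposed factors vs. the dense tensor.
n_dense = w.size                                      # 640
n_tucker = core.size + sum(u.size for u in factors)   # 271
```

During target-side adaptation, only one of these components (say, the core tensor) would receive gradient updates while the factor matrices stay frozen, which both shrinks the trainable-parameter count and caps the model's effective capacity, consistent with the implicit-regularization argument in the abstract.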
