Adaptive Knowledge Transfer between Deep Neural Networks and Large Language Models for Cross-Domain Tasks

Aldric Penwell

Abstract

Deep neural networks (DNNs) have achieved remarkable performance across multiple domains, yet their adaptability to new environments remains constrained by distributional shifts and limited labeled data. In contrast, large language models (LLMs) demonstrate strong generalization and emergent reasoning capabilities, offering a new perspective on knowledge transfer. This paper proposes an adaptive knowledge transfer framework that unifies deep learning and LLM paradigms for cross-domain tasks. The framework introduces a dual-stage adaptation process: (1) semantic embedding alignment via representation distillation from pre-trained LLMs to task-specific deep networks, and (2) adaptive fine-tuning using a self-supervised cross-domain consistency loss. Through this hybrid mechanism, DNNs gain semantic priors and linguistic knowledge from LLMs while retaining efficiency on downstream vision, speech, and sensor tasks. We validate the approach on three cross-domain datasets covering text-vision and text-IoT scenarios. Experimental results show that the proposed framework outperforms baseline transfer learning and fine-tuning methods by 7.6% in average accuracy and reduces domain discrepancy, as measured by Maximum Mean Discrepancy (MMD), by 12%. This study provides a systematic pathway for bridging the representational gap between DNNs and LLMs, highlighting how large-scale language pretraining can serve as a universal semantic adapter for diverse modalities.
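The abstract does not include an implementation, so the following minimal PyTorch sketch only illustrates the kinds of losses it describes: a representation-distillation alignment term for stage 1, a self-supervised cross-domain consistency term for stage 2, and the RBF-kernel MMD metric used to report domain discrepancy. The function names, the cosine and symmetric-KL loss forms, the linear projector, and all dimensions are assumptions made for illustration, not the authors' method.

```python
# Illustrative sketch only: loss forms, names, and hyperparameters below are
# assumptions, not the paper's published implementation.
import torch
import torch.nn.functional as F


def alignment_loss(student_feats: torch.Tensor,
                   teacher_embeds: torch.Tensor,
                   projector: torch.nn.Module) -> torch.Tensor:
    """Stage 1 (assumed form): project DNN features into the LLM embedding
    space and pull them toward the frozen LLM representations (cosine
    distillation)."""
    projected = F.normalize(projector(student_feats), dim=-1)   # (B, d_llm)
    targets = F.normalize(teacher_embeds.detach(), dim=-1)      # teacher frozen
    return 1.0 - (projected * targets).sum(dim=-1).mean()


def consistency_loss(logits_src: torch.Tensor,
                     logits_tgt: torch.Tensor) -> torch.Tensor:
    """Stage 2 (assumed form): a self-supervised cross-domain consistency
    term, here a symmetric KL divergence between paired predictions."""
    p = F.log_softmax(logits_src, dim=-1)
    q = F.log_softmax(logits_tgt, dim=-1)
    return 0.5 * (F.kl_div(p, q, reduction="batchmean", log_target=True)
                  + F.kl_div(q, p, reduction="batchmean", log_target=True))


def mmd_rbf(x: torch.Tensor, y: torch.Tensor,
            sigma: float = 1.0) -> torch.Tensor:
    """Maximum Mean Discrepancy with an RBF kernel, the discrepancy metric
    the abstract reports."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        sq_dists = torch.cdist(a, b) ** 2
        return torch.exp(-sq_dists / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


if __name__ == "__main__":
    # Toy shapes, chosen arbitrarily for the demo.
    B, d_dnn, d_llm, n_cls = 8, 512, 768, 10
    projector = torch.nn.Linear(d_dnn, d_llm)
    print(alignment_loss(torch.randn(B, d_dnn), torch.randn(B, d_llm), projector).item())
    print(consistency_loss(torch.randn(B, n_cls), torch.randn(B, n_cls)).item())
    print(mmd_rbf(torch.randn(B, d_llm), torch.randn(B, d_llm)).item())
```

In a sketch like this, the two stage losses would be minimized during training while MMD between source- and target-domain features serves purely as an evaluation metric.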
