The dissertation addresses key challenges in federated learning (FL), a setting in which organizations collaboratively train models without sharing raw data: data heterogeneity, data scarcity, privacy protection, and communication efficiency. It proposes four novel methods: generative data augmentation for heterogeneous features, parameter-efficient adaptation of vision-language models, evolutionary hyperparameter tuning under non-IID conditions, and personalization of diffusion models for one-shot FL. Together, these contributions improve model performance, efficiency, and privacy in real-world FL applications. (Abstract shortened.)
BibTeXKey: Che25
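
To make the collaborative-training-without-raw-data setting concrete, here is a minimal sketch of a generic federated averaging (FedAvg) loop; it is an illustration of the standard FL baseline, not of the dissertation's four methods, and the linear model, synthetic non-IID client data, and function names are assumptions for the example.

```python
# Minimal FedAvg sketch: clients train locally on private data; only model
# parameters (never raw data) are sent to the server for aggregation.
# Illustrative only -- not the dissertation's method.
import numpy as np

def local_sgd(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One communication round: each client updates locally, server averages."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    # Weighted average of client models, proportional to local dataset size.
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Toy non-IID data: each client draws features from a shifted distribution.
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(loc=shift, size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print("recovered weights:", w)  # approaches true_w without pooling raw data
```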