This article introduces a novel fog-platform-based methodology for controlled music generation and prediction, integrating distributed computing with Transformer-based models. The proposed system leverages a fog computing architecture to distribute processing tasks between edge devices and cloud servers, enabling real-time feature extraction while maintaining high musicality. By performing MFCC calculation and rhythm analysis at fog nodes close to the data sources, we achieve a significant latency reduction compared with pure cloud architectures. The cloud-based Transformer core then utilizes these pre-processed features for style-controlled music generation through its self-attention mechanism. Experimental results demonstrate that our hybrid approach not only maintains high style accuracy but also substantially reduces upstream bandwidth consumption, addressing critical challenges in IoT-enabled music generation scenarios. This research pioneers a viable pathway for deploying AI music systems in latency-sensitive environments through fog-cloud collaboration.
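
To illustrate the fog-node preprocessing step described above, the following minimal sketch shows how MFCC and rhythm features might be extracted on an edge device and forwarded as a compact payload to the cloud-side Transformer. The librosa-based implementation, the function name extract_fog_features, and the parameter choices are illustrative assumptions, not the paper's exact pipeline.

```python
# Illustrative fog-node feature extraction (assumed implementation, not the
# authors' exact pipeline): compute MFCCs and a simple rhythm descriptor
# locally, then forward only the compact features upstream.
import json
import numpy as np
import librosa


def extract_fog_features(audio_path: str, n_mfcc: int = 13) -> dict:
    """Extract compact audio features at a fog node near the data source."""
    # Load the clip, resampled to librosa's default 22.05 kHz rate.
    y, sr = librosa.load(audio_path)

    # Timbral features: per-frame MFCCs, summarized by mean and standard
    # deviation so only a small vector (not the raw audio) is uploaded.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    mfcc_summary = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Rhythm analysis: global tempo estimate and beat positions in seconds.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    return {
        "mfcc_summary": mfcc_summary.tolist(),
        "tempo_bpm": float(np.atleast_1d(tempo)[0]),
        "beat_times": beat_times.tolist(),
    }


if __name__ == "__main__":
    # The feature dictionary is serialized (here as JSON) and sent to the
    # cloud Transformer, replacing the much larger raw waveform upload.
    features = extract_fog_features("clip.wav")
    print(json.dumps(features)[:200])
```

Because only summary statistics and beat annotations cross the uplink instead of raw audio, a split of this kind is what would account for the reduced upstream bandwidth and lower end-to-end latency reported for the fog-cloud configuration.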