We consider the long-standing and largely open machine learning problem of detecting anomalous regions in multimodal 3D images. Purely data-driven methods often fail in such tasks because they rarely incorporate domain-specific knowledge into the algorithm and do not fully exploit information from multiple modalities. We address these issues by proposing a novel data-fusion framework that leverages domain-specific knowledge and labeled multimodal data, together with the power of randomized learning techniques. To demonstrate the framework's effectiveness, we apply it to the challenging task of detecting subtle pathologies in MRI scans. A distinctive feature of the resulting solution is that it explicitly encodes evidence-based medical knowledge about pathologies into the feature maps. Our experiments show that the method achieves lesion detection in 71% of subjects using just one such feature. Integrating information from all feature maps and data modalities raises the detection rate to 78%. Using stochastic configuration networks to initialize the weights of the classification model increases precision by 18% compared with deterministic approaches. This demonstrates the feasibility and practical viability of building efficient and interpretable randomized algorithms for automated anomaly detection in complex multimodal data.
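To make the randomized-learning component concrete, the sketch below shows the core idea behind stochastic configuration networks (SCNs): hidden nodes with random weights are added incrementally, and a candidate node is kept only if it satisfies a supervisory inequality that guarantees the training residual keeps shrinking. This is an illustrative toy implementation on a 1-D regression target, not the paper's actual model; the function name `scn_fit`, the weight scope, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scn_fit(X, y, max_nodes=25, candidates=50, r=0.99, seed=0):
    """Minimal SCN-style sketch (hypothetical helper, regression form).

    Hidden nodes are added one at a time; a candidate's random input
    weights are accepted only if the SCN supervisory inequality holds,
    which ensures each accepted node reduces the residual error.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))      # hidden-layer outputs accumulated so far
    e = y.copy()              # current residual
    W, B = [], []
    beta = np.zeros(0)        # output weights (empty until a node is kept)
    for _ in range(max_nodes):
        best, best_xi = None, -np.inf
        for _ in range(candidates):
            # weight scope chosen ad hoc so sigmoids can bend on [0, 1]
            w = rng.uniform(-30, 30, d)
            b = rng.uniform(-30, 30)
            h = sigmoid(X @ w + b)
            # supervisory inequality: xi > 0 guarantees residual decrease
            xi = (e @ h) ** 2 / (h @ h) - (1 - r) * (e @ e)
            if xi > best_xi:
                best_xi, best = xi, (w, b, h)
        if best_xi <= 0:      # no acceptable candidate: stop growing
            break
        w, b, h = best
        W.append(w)
        B.append(b)
        H = np.column_stack([H, h])
        # re-solve the output weights by least squares on all hidden outputs
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        e = y - H @ beta
    return np.array(W), np.array(B), beta

# usage: fit a noisy-free oscillating target and inspect the fit error
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(4 * np.pi * X[:, 0])
W, B, beta = scn_fit(X, y)
pred = sigmoid(X @ W.T + B) @ beta
print(round(float(np.mean((pred - y) ** 2)), 4))
```

The key contrast with deterministic initialization is the data-dependent acceptance test: random weights are not used blindly, but filtered through the inequality so that network growth is provably convergent.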