Regulating Modality Utilization within Multimodal Fusion Networks
dc.contributor.author | Singh, Saurav | |
dc.contributor.author | Saber, Eli | |
dc.contributor.author | Markopoulos, Panos P. | |
dc.contributor.author | Heard, Jamison | |
dc.date.accessioned | 2024-09-27T13:18:44Z | |
dc.date.available | 2024-09-27T13:18:44Z | |
dc.date.issued | 2024-09-19 | |
dc.date.updated | 2024-09-27T13:18:45Z | |
dc.description.abstract | Multimodal fusion networks play a pivotal role in leveraging diverse sources of information for enhanced machine learning applications in aerial imagery. However, current approaches often suffer from a bias towards certain modalities, diminishing the potential benefits of multimodal data. This paper addresses this issue by proposing a novel modality utilization-based training method for multimodal fusion networks. The method guides the network's utilization of its input modalities, ensuring a balanced integration of complementary information streams and mitigating the overutilization of dominant modalities. The method is validated on multimodal aerial imagery classification and image segmentation tasks, maintaining modality utilization within ±10% of the user-defined target and demonstrating its versatility and efficacy across applications. Furthermore, the study explores the robustness of the fusion networks against noise in input modalities, a crucial aspect in real-world scenarios. The method demonstrates improved noise robustness, maintaining performance amid environmental changes that affect different aerial imagery sensing modalities. A network trained with 75.0% EO utilization achieves significantly better accuracy (81.4%) under noisy conditions (noise variance = 0.12) than one trained traditionally with 99.59% EO utilization (73.7%). Additionally, it maintains an average accuracy of 85.0% across noise levels, outperforming the traditional method's average accuracy of 81.9%. Overall, the proposed approach presents a significant step towards harnessing the full potential of multimodal data fusion in diverse machine learning applications such as robotics, healthcare, satellite imagery, and defense. | |
dc.description.department | Electrical and Computer Engineering | |
dc.description.department | Computer Science | |
dc.identifier | doi: 10.3390/s24186054 | |
dc.identifier.citation | Sensors 24 (18): 6054 (2024) | |
dc.identifier.uri | https://hdl.handle.net/20.500.12588/6631 | |
dc.title | Regulating Modality Utilization within Multimodal Fusion Networks |
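A minimal sketch of the idea described in the abstract above: training a two-modality (EO/SAR) fusion network with an added penalty that pulls the network's estimated utilization of one modality toward a user-defined target. The abstract does not specify how utilization is measured, so this sketch approximates it as the share of gradient energy flowing into each modality's input; all names (FusionNet, utilization_loss, target_eo, lam) are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of utilization-regulated training (PyTorch).
    # Utilization of a modality is approximated here as its share of input
    # gradient energy; the paper's actual metric may differ.
    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        """Toy two-stream fusion network with feature concatenation."""
        def __init__(self, eo_dim, sar_dim, num_classes):
            super().__init__()
            self.eo_enc = nn.Linear(eo_dim, 64)
            self.sar_enc = nn.Linear(sar_dim, 64)
            self.head = nn.Linear(128, num_classes)

        def forward(self, eo, sar):
            feats = torch.cat([torch.relu(self.eo_enc(eo)),
                               torch.relu(self.sar_enc(sar))], dim=-1)
            return self.head(feats)

    def utilization_loss(model, eo, sar, labels, target_eo=0.75, lam=1.0):
        # Track gradients w.r.t. the raw inputs to estimate utilization.
        eo = eo.clone().requires_grad_(True)
        sar = sar.clone().requires_grad_(True)
        task_loss = nn.functional.cross_entropy(model(eo, sar), labels)
        # Gradient energy per modality (create_graph=True so the penalty
        # itself is differentiable and can shape training).
        g_eo, g_sar = torch.autograd.grad(task_loss, (eo, sar),
                                          create_graph=True)
        e_eo, e_sar = g_eo.pow(2).sum(), g_sar.pow(2).sum()
        u_eo = e_eo / (e_eo + e_sar + 1e-12)  # EO utilization in [0, 1]
        # Penalize deviation from the user-defined target utilization.
        return task_loss + lam * (u_eo - target_eo).pow(2), u_eo

    # Usage: one training step on random data.
    model = FusionNet(eo_dim=32, sar_dim=32, num_classes=10)
    eo, sar = torch.randn(8, 32), torch.randn(8, 32)
    labels = torch.randint(0, 10, (8,))
    loss, u_eo = utilization_loss(model, eo, sar, labels, target_eo=0.75)
    loss.backward()

The target of 0.75 mirrors the 75.0% EO utilization setting reported in the abstract; the ±10% tolerance would correspond to the regularizer keeping u_eo within roughly 0.65-0.85 over training.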