Learning to be Multimodal: Co-evolving Sensory Modalities and Sensor Properties
Proceedings of the 5th Conference on Robot Learning, PMLR 164:1782-1788, 2022.
Abstract
Making a single sensory modality precise and robust enough to reach human-level performance and autonomy could be very expensive or intractable. Fusing information from multiple sensory modalities is promising: recent works, for example, have shown benefits from combining vision with haptic sensors or with audio data. Learning-based methods accelerate progress in this field by removing the need for manual feature engineering. However, the sensor properties and the choice of sensory modalities are still usually decided manually. Our blue-sky view is that we could simulate or emulate sensors with various properties, then infer which properties and combinations of sensors yield the best learning outcomes. This view would incentivize the development of novel, affordable sensors that noticeably improve the performance, robustness, and ease of training classifiers, models, and policies for robotics, and it would motivate building hardware that provides signals complementary to existing ones. As a result, we could significantly expand the realm of applicability of learning-based approaches.
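The search over sensor properties and combinations sketched in the abstract can be made concrete with a small simulation loop. The code below is purely illustrative and not from the paper: the modality names, noise levels, feature dimensions, the toy binary-classification task, and the use of cross-validated accuracy as the "learning outcome" are all assumptions made for the sketch.

```python
"""Illustrative sketch: emulate sensor suites and score which combination learns best."""
import itertools

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical sensor suite: modality names, emulated noise levels, and
# feature dimensionalities are assumed purely for this example.
MODALITIES = ("vision", "haptics", "audio")
NOISE_STD = {"vision": 0.2, "haptics": 0.5, "audio": 0.8}
FEATURE_DIM = {"vision": 8, "haptics": 4, "audio": 4}


def simulate_observations(modalities, n_samples=600):
    """Emulate noisy multi-sensor readings for a toy binary task."""
    labels = rng.integers(0, 2, size=n_samples)
    blocks = []
    for m in modalities:
        d = FEATURE_DIM[m]
        class_direction = rng.normal(1.0, 0.2, size=(1, d))  # signal carried by this sensor
        noise = rng.normal(0.0, NOISE_STD[m], size=(n_samples, d))
        blocks.append(labels[:, None] * class_direction + noise)
    return np.hstack(blocks), labels


def evaluate_config(modalities):
    """Score one sensor combination by how well a simple classifier learns from it."""
    X, y = simulate_observations(modalities)
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()


# Exhaustively score every non-empty combination of modalities and keep the best.
scores = {
    combo: evaluate_config(combo)
    for r in range(1, len(MODALITIES) + 1)
    for combo in itertools.combinations(MODALITIES, r)
}
best = max(scores, key=scores.get)
print(f"best sensor combination: {best}, cv accuracy {scores[best]:.3f}")
```

In a realistic instantiation, the toy data generator would be replaced by a physics-based sensor simulator or emulator, the classifier by the downstream model or policy of interest, and the exhaustive enumeration by a more sample-efficient search over sensor properties (e.g., Bayesian optimization or evolutionary search), but the overall loop of "emulate, train, compare learning outcomes" is the same.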