Improving information exchange and interaction between sensors in both single and multi-agent systems
Sensors that process the same type of raw input are grouped into modalities. Combining information from sensors of different modalities is a challenging task, but one that can improve many areas of underwater robotics, including vehicle localization and environment mapping.
The majority of AUV localization, perception, and mapping approaches are hand-designed for very specific sensor combinations. We aim to develop generalized algorithms and methods that allow data from multiple modalities to be used together easily across the variety of tasks that AUVs perform.
Currently, we are seeking to make advances in opti-acoustic fusion, as well as in fusion between different sonar modalities.
These algorithms will enable improved estimation and navigation for single and multi-agent systems equipped with multiple, differing sensor modalities.
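As a minimal illustration of cross-modal fusion (a standard textbook technique, not the group's specific method), the sketch below combines two independent range measurements of the same landmark, one from a camera and one from a sonar, using inverse-variance weighting. The sensor names and noise values are purely hypothetical assumptions for the example.

```python
import numpy as np

def fuse_estimates(z_a, var_a, z_b, var_b):
    """Inverse-variance (minimum-variance) fusion of two independent
    scalar measurements of the same quantity.

    Returns the fused estimate and its variance; the fused variance is
    always smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    z = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return z, var

# Hypothetical example: a precise camera-derived range and a noisier
# sonar-derived range to the same landmark (meters).
z, var = fuse_estimates(10.2, 0.05**2, 10.6, 0.5**2)
```

The fused estimate is pulled toward the more precise (camera) measurement, and its variance is lower than either sensor's alone; the same weighting generalizes to the multivariate case via the Kalman update.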