Multi-modal time series analysis has recently emerged as a prominent research area in data mining, driven by the increasing availability of diverse data modalities, such as text, images, and structured tabular data, from real-world sources. However, effective analysis of multi-modal time series is hindered by data heterogeneity, the modality gap, misalignment, and inherent noise. Recent multi-modal time series methods have exploited multi-modal context via deep learning-based cross-modal interactions, significantly enhancing a variety of downstream tasks. In this tutorial and survey, we present a systematic and up-to-date overview of multi-modal time series datasets and methods. We first state the existing challenges of multi-modal time series analysis and our motivations, together with a brief introduction of the preliminaries. We then summarize the general pipeline and categorize existing methods under a unified cross-modal interaction framework encompassing fusion, alignment, and transference at different levels (i.e., input, intermediate, and output), highlighting key concepts and ideas. We also discuss real-world applications of multi-modal analysis for both standard and spatial time series, tailored to general and specific domains. Finally, we discuss future research directions to help practitioners explore and exploit multi-modal time series. Up-to-date resources are provided in the accompanying GitHub repository.
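To make the intermediate-level fusion idea in this framework concrete, the sketch below shows one common pattern: time series token embeddings query text token embeddings through cross-attention, and the attended result is merged back into the time series stream. This is a minimal illustration in PyTorch under our own assumptions, not a specific method from the survey; the `CrossModalFusion` module, dimensions, and token counts are all hypothetical.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative intermediate-level fusion (hypothetical module):
    time series tokens attend to text tokens via cross-attention."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, ts_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the time series; keys/values come from the text modality.
        attended, _ = self.cross_attn(ts_tokens, text_tokens, text_tokens)
        # A residual connection preserves the original time series information.
        return self.norm(ts_tokens + attended)

# Toy usage: 8 time series patches and 16 text tokens, both embedded to d_model=64.
ts = torch.randn(2, 8, 64)    # (batch, ts_tokens, d_model)
txt = torch.randn(2, 16, 64)  # (batch, text_tokens, d_model)
fused = CrossModalFusion()(ts, txt)
print(fused.shape)            # torch.Size([2, 8, 64])
```

Input-level interaction would instead combine the modalities before encoding (e.g., concatenating tokens), while output-level interaction would merge per-modality predictions.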
Time | Speaker | Title |
---|---|---|
1:00 pm - 1:10 pm | Haifeng Chen | Opening and Introduction |
1:10 pm - 1:40 pm | Zijie Pan | Multi-Modal Time Series Datasets |
1:40 pm - 2:40 pm | Dongjin Song | Taxonomy of Multi-Modal Time Series Methods |
2:40 pm - 3:00 pm | - | Break |
3:00 pm - 3:40 pm | Jingchao Ni | Applications of Multi-Modal Time Series Analysis |
3:40 pm - 4:00 pm | Jingchao Ni | Future Directions |