Egocentric videos provide a unique perspective into individuals' daily experiences, yet their unstructured nature presents challenges for perception. In this paper, we introduce AMEGO, a novel approach aimed at enhancing the comprehension of very-long egocentric videos. Inspired by humans' ability to memorise information from a single viewing, our method focuses on constructing self-contained representations from the egocentric video, capturing key locations and object interactions. This representation is semantic-free and facilitates multiple queries without the need to reprocess the entire visual content. Additionally, to evaluate our understanding of very-long egocentric videos, we introduce the new Active Memories Benchmark (AMB), composed of more than 20K highly challenging visual queries from EPIC-KITCHENS. These queries cover different levels of video reasoning (sequencing, concurrency and temporal grounding) to assess detailed video understanding capabilities. We showcase improved performance of AMEGO on AMB, surpassing other video QA baselines by a substantial margin.
We propose AMEGO - a representation of long videos. AMEGO breaks the video into Hand-Object Interaction (HOI) tracklets and location segments. This forms a semantic-free memory of the video. AMEGO is built in an online fashion, eliminating the need to reprocess past frames.
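To make the idea concrete, here is a minimal sketch (not the authors' code) of such an online, semantic-free memory: per-frame HOI and location detections are consumed once, extending or opening tracklets and location segments without ever revisiting earlier frames. All class and field names below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class HOITracklet:
    object_id: int    # tracker-assigned identity of the interacted object
    start_frame: int
    end_frame: int

@dataclass
class LocationSegment:
    location_id: int  # identity of the visited location
    start_frame: int
    end_frame: int

@dataclass
class AMEGOMemory:
    tracklets: list = field(default_factory=list)
    locations: list = field(default_factory=list)

    def update(self, frame_idx, active_object_id, location_id):
        """Consume one frame's detections; past frames are never reprocessed."""
        # Extend the current HOI tracklet if the same object is still held,
        # otherwise open a new tracklet (interactions may be absent: None).
        if active_object_id is not None:
            last = self.tracklets[-1] if self.tracklets else None
            if last and last.object_id == active_object_id \
                    and last.end_frame == frame_idx - 1:
                last.end_frame = frame_idx
            else:
                self.tracklets.append(
                    HOITracklet(active_object_id, frame_idx, frame_idx))
        # Extend the current location segment or open a new one.
        if self.locations and self.locations[-1].location_id == location_id:
            self.locations[-1].end_frame = frame_idx
        else:
            self.locations.append(
                LocationSegment(location_id, frame_idx, frame_idx))
```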
Explore a sample of our 20.5K-query semantic-free VQA benchmark, organised into 8 question templates.
We visually demonstrate how AMB questions are answered using the AMEGO representation, through sample clips [scroll for different questions].
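As a hypothetical illustration (continuing the sketch above, not the paper's implementation), sequencing- and temporal-grounding-style queries can be answered directly from the stored tracklets and location segments, without re-reading any video frames:

```python
def objects_in_order(memory: AMEGOMemory) -> list:
    """Sequencing: object identities in the order they were first interacted with."""
    seen, order = set(), []
    for t in sorted(memory.tracklets, key=lambda t: t.start_frame):
        if t.object_id not in seen:
            seen.add(t.object_id)
            order.append(t.object_id)
    return order

def interacted_at(memory: AMEGOMemory, location_id: int) -> set:
    """Temporal grounding: objects handled while at a given location."""
    spans = [(s.start_frame, s.end_frame)
             for s in memory.locations if s.location_id == location_id]
    return {t.object_id for t in memory.tracklets
            if any(t.start_frame <= e and t.end_frame >= s for s, e in spans)}
```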
@inproceedings{goletto2024amego,
  title={AMEGO: Active Memory from long EGOcentric videos},
  author={Goletto, Gabriele and Nagarajan, Tushar and Averta, Giuseppe and Damen, Dima},
  booktitle={European Conference on Computer Vision},
  year={2024}
}