Behavioral signatures of temporal context retrieval during continuous recognition
Atharva Joshi, Kamalakar Dadi, Vishnu Sreekumar
Annual Meeting of the Cognitive Science Society, CogSci, 2025
@inproceedings{bib_Beha_2025, AUTHOR = {Atharva Joshi, Kamalakar Dadi, Vishnu Sreekumar}, TITLE = {Behavioral signatures of temporal context retrieval during continuous recognition}, BOOKTITLE = {Annual Meeting of the Cognitive Science Society}, YEAR = {2025}}
An influential mathematical model of memory, the temporal context model (TCM), posits that we encode items along with their associations to temporal context (Howard & Kahana, 2002). Temporal context is conceived of as a recency-weighted average of past experiences. Critically, the model assumes that when an item is retrieved later, its associated temporal context is also obligatorily retrieved. Existing evidence for retrieved temporal context comes primarily from free-recall studies. However, free recall introduces critical confounds that are difficult to resolve (Folkerts et al., 2018) and encourages memory strategies that may mimic temporal context effects (Hintzman, 2011). Schwartz et al. (2005) showed that temporal context influences image recognition within short experimental lists. We extend this work by using the Natural Scenes Dataset (NSD) to demonstrate that reinstating temporal context enhances recognition accuracy even across long timescales. Specifically, when the relevant temporal context is successfully reinstated for a reference image, recognition accuracy for subsequent images increases. Critically, this influence falls off with the temporal distance (at encoding) from the reference image only when the temporal context is successfully retrieved, as predicted by TCM. Furthermore, the slope of this temporal gradient increases with the strength of the influence of the retrieved temporal context. These findings extend our understanding of temporal context effects in episodic memory by showing that temporal context is retrieved even in tasks that do not encourage inter-item linking as a memory strategy.
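To make the "recency-weighted average" above concrete, here is a minimal Python sketch of the TCM context-update equation from Howard and Kahana (2002); the vector dimensionality, drift parameter, and random item patterns are illustrative choices, not values fitted in the paper.

```python
import numpy as np

def update_context(context, item_input, beta=0.6):
    """One TCM context-update step (Howard & Kahana, 2002):
    t_i = rho * t_{i-1} + beta * t_in, with rho chosen so the updated
    context stays at unit length. Repeated updates make the context a
    recency-weighted average of past item inputs."""
    dot = context @ item_input
    rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
    return rho * context + beta * item_input

# Illustrative run: context drifts, becoming a recency-weighted blend of items.
rng = np.random.default_rng(0)
context = rng.normal(size=64); context /= np.linalg.norm(context)
for _ in range(5):
    item = rng.normal(size=64); item /= np.linalg.norm(item)
    context = update_context(context, item)
print(np.linalg.norm(context))  # ~1.0 by construction
```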
Seeing Eye to AI: Comparing Human Gaze and Model Attention in Video Memorability
Prajneya Kumar, Eshika Khandelwal, Makarand Tapaswi, Vishnu Sreekumar
Winter Conference on Applications of Computer Vision, WACV, 2025
@inproceedings{bib_Seei_2025, AUTHOR = {Prajneya Kumar, Eshika Khandelwal, Makarand Tapaswi, Vishnu Sreekumar}, TITLE = {Seeing Eye to AI: Comparing Human Gaze and Model Attention in Video Memorability}, BOOKTITLE = {Winter Conference on Applications of Computer Vision}, YEAR = {2025}}
Understanding what makes a video memorable has important applications in advertising and education technology. Towards this goal, we investigate the spatio-temporal attention mechanisms underlying video memorability. Unlike previous works that fuse multiple features, we adopt a simple CNN+Transformer architecture that enables analysis of spatio-temporal attention while matching state-of-the-art (SoTA) performance on video memorability prediction. We compare model attention against human gaze fixations collected through a small-scale eye-tracking study in which humans perform the video memory task. We uncover the following insights: (i) quantitative saliency metrics show that our model, trained only to predict a memorability score, exhibits spatial attention patterns similar to human gaze, especially for more memorable videos; (ii) the model assigns greater importance to initial frames in a video, mimicking human attention patterns; (iii) panoptic segmentation reveals that both the model and humans assign a greater share of attention to "things" and less attention to "stuff" relative to their occurrence probability.
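The model-versus-gaze comparison in (i) can be illustrated with the linear correlation coefficient (CC), a standard saliency metric; the sketch below is hypothetical rather than the paper's evaluation code, and it assumes the model attention map and a blurred human fixation density map have already been resized to the same frame resolution.

```python
import numpy as np

def saliency_cc(model_attention: np.ndarray, fixation_density: np.ndarray) -> float:
    """Linear correlation coefficient (CC) between a model's spatial attention
    map and a human fixation density map for the same frame. Both maps are
    z-scored first; CC near 1 indicates closely matched spatial attention."""
    a = (model_attention - model_attention.mean()) / (model_attention.std() + 1e-8)
    h = (fixation_density - fixation_density.mean()) / (fixation_density.std() + 1e-8)
    return float((a * h).mean())

# Hypothetical usage on random 2-D maps of the same shape.
rng = np.random.default_rng(0)
print(saliency_cc(rng.random((36, 64)), rng.random((36, 64))))
```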
Examining dependencies among different time scales in episodic memory—an experience sampling study
Hyungwook Yim, Paul M. Garrett, Megan Baker, Jaehyuk Cha, Vishnu Sreekumar, Simon J. Dennis
Frontiers in Psychology, FP, 2024
@article{bib_Exam_2024, AUTHOR = {Hyungwook Yim, Paul M. Garrett, Megan Baker, Jaehyuk Cha, Vishnu Sreekumar, Simon J. Dennis}, TITLE = {Examining dependencies among different time scales in episodic memory—an experience sampling study}, JOURNAL = {Frontiers in Psychology}, YEAR = {2024}}
We re-examined whether different time scales, such as week, day of week, and hour of day, are used independently during memory retrieval, as has been previously argued (i.e., independence of scales). To overcome the limitations of previous studies, we used experience sampling technology to obtain test stimuli with higher ecological validity. We also used pointwise mutual information to directly calculate, in a formal way, the degree of dependency between time scales. Participants were provided with a smartphone equipped with an app that automatically collected time, images, GPS, audio, and accelerometry data, and were asked to wear it around their necks for two weeks. After a one-week retention interval, participants were presented with images captured during their data collection phase and were tested on their memory of when each event happened (i.e., week, day of week, and hour). We find that, in contrast to previous arguments, memories of different time scales were not retrieved independently. Moreover, through rendering recurrence plots of the images that the participants collected, we provide evidence that the dependency may have originated from the repetitive events the participants encountered in their daily lives.
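The pointwise mutual information computation behind this dependency analysis can be sketched as follows; the pairing of time scales and the toy responses are hypothetical illustrations, not the study's data or analysis code.

```python
import numpy as np
from collections import Counter

def pmi(pairs):
    """Pointwise mutual information (in bits) for each observed pair of
    time-scale values, e.g. (day_of_week, hour_bin).
    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) ): 0 means the two scales
    are independent; nonzero values indicate dependency."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return {
        (x, y): np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    }

# Hypothetical responses as (day_of_week, hour_bin) pairs.
responses = [("Mon", "am"), ("Mon", "am"), ("Tue", "pm"), ("Mon", "pm"), ("Tue", "pm")]
print(pmi(responses))
```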
Towards an ecologically valid naturalistic cognitive neuroscience of memory and event cognition
R Pooja, Pritha Ghosh, Vishnu Sreekumar
Neuropsychologia, NPS, 2024
@article{bib_Towa_2024, AUTHOR = {R Pooja, Pritha Ghosh, Vishnu Sreekumar}, TITLE = {Towards an ecologically valid naturalistic cognitive neuroscience of memory and event cognition}, JOURNAL = {Neuropsychologia}, YEAR = {2024}}
The landscape of human memory and event cognition research has witnessed a transformative journey toward the use of naturalistic contexts and tasks. In this review, we track this progression from abrupt, artificial stimuli used in extensively controlled laboratory experiments to more naturalistic tasks and stimuli that present a more faithful representation of the real world. We argue that in order to improve ecological validity, naturalistic study designs must consider the complexity of the cognitive phenomenon being studied. Then, we review the current state of “naturalistic” event segmentation studies and critically assess frequently employed movie stimuli. We evaluate recently developed tools like lifelogging and other extended reality technologies to help address the challenges we identified with existing naturalistic approaches. We conclude by offering some guidelines that can be used to design ecologically valid cognitive neuroscience studies of memory and event cognition.
From Sound To Meaning In The Auditory Cortex: A Neuronal Representation And Classification Analysis
Kumar Neelabh, Vishnu Sreekumar
Annual Conference of the International Speech Communication Association, INTERSPEECH, 2024
@inproceedings{bib_From_2024, AUTHOR = {Kumar Neelabh, Vishnu Sreekumar}, TITLE = {From Sound To Meaning In The Auditory Cortex: A Neuronal Representation And Classification Analysis}, BOOKTITLE = {Annual Conference of the International Speech Communication Association}, YEAR = {2024}}
The neural mechanisms underlying the comprehension of meaningful sounds are yet to be fully understood. While previous research has shown that the auditory cortex can classify auditory stimuli into distinct semantic categories, the specific contributions of the primary (A1) and secondary (A2) auditory cortex to this process are not well understood. We used songbirds as a model species and analyzed their neural responses as they listened to their entire vocal repertoire (~10 types of vocalizations). We first demonstrate that the distances between call types in the neural representation spaces of A1 and A2 are correlated with their respective distances in the acoustic feature space. Then, we show that while the neural activity in both A1 and A2 is equally informative of the acoustic category of the vocalizations, A2 is significantly more informative of the semantic category of those vocalizations. Additionally, we show that the semantic categories are more separated in A2. These findings suggest that as the incoming signal moves downstream within the auditory cortex, its acoustic information is preserved whereas its semantic information is enhanced.
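The distance-correlation analysis described above can be sketched in the style of representational similarity analysis; the call-type matrices below are random placeholders, and the Euclidean metric and Spearman correlation are assumed choices rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def representational_alignment(acoustic_features, neural_responses):
    """Correlate pairwise call-type distances in acoustic feature space with
    pairwise distances in a neural response space (e.g., A1 or A2 population
    activity). Rows index call types; columns are features or neurons."""
    acoustic_d = pdist(acoustic_features, metric="euclidean")
    neural_d = pdist(neural_responses, metric="euclidean")
    return spearmanr(acoustic_d, neural_d)  # rank correlation and p-value

# Hypothetical data: 10 call types, 8 acoustic features, 50 recorded neurons.
rng = np.random.default_rng(1)
rho, p = representational_alignment(rng.normal(size=(10, 8)), rng.normal(size=(10, 50)))
print(rho, p)
```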