Projects

Why are face and object processing segregated in the human visual system in the first place?

I used convolutional neural networks (CNNs), which perform on par with humans at visual recognition, to test the hypothesis that we have cortical specializations for faces because the computations they implement (i.e., face recognition) cannot be achieved by cortical machinery engaged in a more generic task (i.e., object categorization). Indeed, I found that networks trained simultaneously on face and object recognition performed worse than separate networks trained on just one of these tasks, suggesting that the brain segregates these tasks to avoid this cost (Dobs et al., CCN, 2019).

Computational reasons for specialization? CNNs trained on faces do not perform as well on objects, and vice versa.
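To make the dual-task setup concrete, here is a minimal PyTorch sketch of the shared-resources idea: one convolutional backbone feeding two task-specific heads, as a stand-in for a network trained simultaneously on face and object recognition. The architecture, class names, and label counts are illustrative placeholders, not the networks used in the paper.

```python
# Minimal sketch: one shared backbone with two task-specific readouts.
# A "specialized" control would instead train two entirely separate
# backbones, one per task, and compare performance on each task.
import torch
import torch.nn as nn

class DualTaskCNN(nn.Module):
    def __init__(self, n_face_ids=500, n_object_classes=1000):
        super().__init__()
        # Shared convolutional trunk ("generic cortical machinery")
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads for face identity and object category
        self.face_head = nn.Linear(128, n_face_ids)
        self.object_head = nn.Linear(128, n_object_classes)

    def forward(self, x):
        z = self.backbone(x)
        return self.face_head(z), self.object_head(z)

model = DualTaskCNN()
faces = torch.randn(8, 3, 224, 224)   # placeholder face images
objects = torch.randn(8, 3, 224, 224) # placeholder object images
face_logits, _ = model(faces)      # face-identity predictions
_, object_logits = model(objects)  # object-category predictions
```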

How do perceptual computations unfold over time?

Using magnetoencephalography (MEG), I measured the time course of neural responses to face images varying along different face dimensions (e.g., gender, identity). I found that extraction of gender and age information begins before identity information, and that even the earliest stages of identity processing were enhanced for familiar faces (Dobs et al., Nat. Commun., 2019). These findings help reveal the sequence of processing steps in face perception and place important constraints on computational models of face perception, which I am currently testing in CNNs.

How face perception unfolds over time. Different face dimensions start to be extracted at different time points.
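As a rough illustration of the time-resolved decoding approach behind these results, the sketch below trains a classifier on MEG sensor patterns separately at each time point and tracks when a face dimension becomes decodable. All data shapes and labels are hypothetical placeholders, not the actual analysis pipeline.

```python
# Minimal sketch of time-resolved MEG decoding with scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_sensors, n_times = 200, 306, 120       # placeholder dimensions
X = np.random.randn(n_trials, n_sensors, n_times)  # MEG epochs (placeholder)
y = np.random.randint(0, 2, n_trials)              # e.g., gender labels

accuracy = np.zeros(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated decoding accuracy at this time point
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# Repeating this for each dimension (gender, age, identity) and comparing
# when accuracy first exceeds chance gives the order of extraction onsets.
```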

What information is conveyed by facial motion and how do we use it?

I found that humans are highly sensitive to facial motion, which conveys identity information over and above that carried by static form (Dobs et al., Vision Res., 2014; Sci. Rep., 2016). In the first application of optimal cue-integration models to visual recognition, I showed that humans integrate facial form and motion optimally (i.e., weighting each cue by its reliability; Dobs et al., Sci. Rep., 2018), and I tested how these cues are processed in the brain (Dobs et al., NeuroImage, 2018).

Cue integration of facial form and motion during face recognition can be predicted by an optimal model.
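The optimal model referred to here is the standard maximum-likelihood cue-combination scheme, in which each cue is weighted by its reliability (inverse variance). Below is a minimal sketch of that computation; the numbers are illustrative, not measured data, and in practice the single-cue sigmas would be estimated from form-only and motion-only trials.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue integration.
import numpy as np

def optimal_combination(est_form, sigma_form, est_motion, sigma_motion):
    # Reliability of each cue is its inverse variance
    r_form, r_motion = 1 / sigma_form**2, 1 / sigma_motion**2
    w_form = r_form / (r_form + r_motion)       # reliability-based weight
    combined = w_form * est_form + (1 - w_form) * est_motion
    # The combined estimate has lower variance than either cue alone
    sigma_combined = np.sqrt(1 / (r_form + r_motion))
    return combined, sigma_combined

# Example: form is twice as reliable as motion, so it gets 80% of the weight
est, sigma = optimal_combination(est_form=0.6, sigma_form=0.1,
                                 est_motion=0.2, sigma_motion=0.2)
```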