When we look at an object, light reflects off its surface and strikes our eyes. It passes through our lenses, triggers photoreceptors on our retinas, and sets off an ion cascade along nerve fibres to our brain, branching through our limbic early-warning system and up to our occipital region to paint, within our mind, a hallucination of the object: an image that exists nowhere but in our head.
To understand what this image is, elements of it are sent to various parts of our brain: one network identifies the base category [table]. Once that is identified, another network of brain regions starts to inform you of basic properties [sturdy but not strong, needs a coaster, not a weapon]. For anything within your catalogue of known items that fits closely enough to things you have seen before, this process is so quick and automatic that you don’t even consciously think about what you are seeing; you just know.
When we look at someone’s face, we are supposed to pick up the cues of pupil location, changes in eye shape, colouring of the cheeks, muscle configuration, number of teeth shown, wrinkling of the brow, wrinkling of the nose, flare of the nostrils, twitching of certain muscles, activity of the ears, orientation of the head, stiffness of the neck and so much more – to figure out both whether we know this person and what their internal state is.
Many studies have shown that a great deal of brain activity is needed both to determine whether you recognise a face and identify the person, and to determine what that person’s internal state is.
For most people, recognising faces and reading a person’s internal state is a priority task. Humans are tricksy beasts, often hiding what they are truly feeling, masking their moods and deceiving others. Often this deception is for their own protection, but often enough it is to take advantage of or harm another. In this era, where most of the wildlife has been tamed or killed, the predators that are actually dangerous to us are other humans.
Additionally, to work together as a team, it is important to know what your teammates are doing without having to be explicitly told, and it is helpful for them to know your state without you having to tell them. This non-verbal communication not only avoids alerting the prey animal being hunted; it also allows communication in a noisy environment, at range, or in a hostile situation.
So it makes sense that humans evolved an internal brain network that is not only good at detecting faces and recognising friend from foe, but also at registering the internal state and intent of those we see, so that we can either brace for attack or work more effectively as a team on collaborative tasks. For most people, it takes only a few key signals to quickly and efficiently determine someone’s mood and intent. Each person viewing another’s face will use a different combination of cues, but it is enough to get there. Consider how little of a table you need to see to guess accurately at the rest.
It is a great pity that many autistic people lack this automatic process (some do have it). Often the read is “yes, that is a face; it seems happy?”, much like the read you would get for a table. The process lacks the extra nuance of “happy with the food, upset at something else – from the stiffness of their partner, I’d say they have had a disagreement”.
Determining a person’s internal state and intent can be learned as a manual skill. Like all acquired skills, it is slow and cumbersome at first, and over time can be improved to the point of becoming an automatic process. In this regard, it is like learning any manual skill, such as tennis, judo or driving. We were not born knowing these skills, but we can acquire them after birth through deliberate learning and become very proficient. Some people, though, only ever become mediocre.
It is important to recognise that even those who hone these skills to an expert level are performing a non-innate task, which makes it more taxing on the brain than automatic tasks like breathing or beating your heart. Consider the times you have pulled over your car and turned the radio down so that you can read the map, or missed a turn because the conversation was too distracting. How often have you forgotten to beat your heart?
Manually learned tasks are inherently less efficient. Recognising faces and determining internal states is already a highly taxing mental task for those with the evolved automatic process; for those who had to learn it manually, it is more taxing still.
For many autistic people, eye contact is hard. It takes a lot of processing to “read a person”. Because of a history of errors, the autistic person will use many features to try to gauge and error-correct an impression. Even so, doubt will remain, because the differences between earnest, honest, expressive people and skilled deceivers are subtle; after all, a deceptive person who is bad at deception won’t get very far.
If the autistic person has become good at this task and isn’t too stressed or fatigued, they can fake eye contact fairly well. Even so, there are likely to be constant background calculations: “am I making too much eye contact?”, “am I making too little eye contact?”, “what does that twitch mean?” and “I think I’ve missed something”. As stress and/or fatigue rise, eye contact becomes harder. And if the person never became good and efficient at this manual task, it was already hard to begin with.
This doesn’t yet include ensuring that your own face is conveying “I’m not a threat”, along with enough of your own mood to help them help you, or to avoid helping them harm you.
Many Western cultures operate on the belief that “eye contact is honest”. Too little eye contact implies “deception / disrespect / uncertainty / subservience / inattentiveness etc.”. Too much eye contact can be read as “invasive / a challenge / a threat”. The result is a systemic, enculturated discrimination against autistic people.
Many autistic people describe the feeling of forcing themselves to make eye contact as “uncomfortable / a pressure behind my eyes / disorienting / dumbifying”. Each of these describes the increased cognitive load of trying to process a face, and how that load drains the resources needed for other things: hearing what the person is saying, thinking of a response, stilling stims and other movements, and so on.
When an autistic person stops looking at your face but continues to talk with you, especially engaging with your conversation or a subject they are passionate about, they are conveying respect, affection and enjoyment. They have likely stopped eye/face contact to give you more attention, because they want to understand you and what you are saying and meaning. You can help them by verbally describing how you feel about what they have said, being more explicit in stating your intent, and checking that you have each understood the other.
Eye contact is overrated.