ATD Blog

Why Subtitles Often Hinder Learning

Wednesday, February 28, 2024

If you’ve ever posed a question and had several excited people shout out responses at once, you know that human beings can meaningfully comprehend only one person speaking at a time. The reason lies in a particular neurological bottleneck.

To understand spoken language, the brain relies on the Wernicke/Broca Network: a small chain of cells that processes the meaning of auditory words. Unfortunately, the brain has only one of these networks. Because every voice must be funneled through this single network, we can comprehend only one speaker at a time: a neurological bottleneck.

Surprisingly, when we silently read, the Wernicke/Broca Network activates to the same extent as when we listen to someone speak. In other words, the brain processes our silent reading voice in the same manner it processes a speaking voice. Accordingly, just as we can’t listen to two people speaking simultaneously, we can’t read text while simultaneously listening to speech.

This bottleneck is the basis for the “redundancy effect,” cognitive load theory, and other frameworks whose research has long demonstrated that learning and memory decrease when students are presented with text and speech simultaneously.

This issue is highly relevant to on-screen captioning. When captions are present during video narration, viewers tend to understand and remember less than those who watch the same video without captions. Even when the captions match the spoken narration word for word, the bottleneck is triggered.


With that said, there are several circumstances when combined captions and narration will not clash and can improve learning.

The first concerns people learning a new language. The bottleneck only activates when a person has fluent reading and listening comprehension skills. When individuals are new to a language and not yet fluent in both (or either), captions can help them make sense of narration they might otherwise miss.


The second concerns degraded or hard-to-understand speech. In some documentaries and video lessons, the audio quality is incredibly poor. This means viewers must expend a lot of cognitive energy simply deciphering the words spoken; this is cognitive energy not spent on deep comprehension or thought. In these instances, captions can ease the decoding of narration and boost learning.

The third concerns heavy accents. When a narrator or teacher has a heavy accent, viewers must again expend significant cognitive energy simply deciphering speech. In this case as well, captions can ease decoding and boost memory and transfer.

In the end, once we recognize the underlying mechanism driving many cognitive and learning theories, seemingly discrepant academic theories work themselves out. Very few research studies are truly at odds; they simply tap into different aspects of the same basic mechanisms.

For a deeper dive into the issue of transfer, join me at ATD24 International Conference & Expo for the session The Transfer Dilemma: Applying Skills Across Contexts.

About the Author

Jared Cooney Horvath, PhD, MEd, is a neuroscientist, educator, and best-selling author. He has conducted research and lectured at Harvard University, Harvard Medical School, the University of Melbourne, and over 750 schools internationally. Jared has published six books and more than 50 research articles, and his work has been featured in numerous popular publications, including The New Yorker, The Atlantic, The Economist, and PBS’s NOVA. He currently serves as Director of LME Global.