Israeli and U.S. researchers have found that the human brain processes spoken language in a step-by-step sequence that mirrors the internal workings of advanced artificial intelligence, the Hebrew University of Jerusalem said in a statement on Sunday.
The findings, published in the journal Nature Communications, suggest that despite their fundamental structural differences, the human brain and Large Language Models (LLMs) share a striking similarity in how they derive meaning.
Researchers from the Hebrew University of Jerusalem, alongside colleagues from Princeton University and Google Research, discovered that as the brain listens to speech, it translates words into meaning through a rapid series of neural steps. This progression unfolds over time in a pattern that directly aligns with how AI models process information through layers of depth.
The study found that early brain responses to speech correspond to the shallow, initial layers of an AI model, which focus on simple features. Later brain activity matches the AI's deeper layers, where context, tone and complex meaning are synthesized.
This alignment was particularly evident in Broca's area, the brain's primary language center, where the strongest activity corresponded to the deepest, most advanced layers of the AI models.
The discovery challenges traditional linguistic theories that view language processing as a rigid, rule-based system. Instead, it supports a model where meaning emerges gradually from context.
To aid further study into how the brain deciphers natural speech, the team has released its full dataset of brain recordings and language features.