This remarkable practice offers a profound window into the human brain’s capacity for language, challenging our often narrow, speech-centric view of communication.
More Than Just Gesture: Alternate Sign Languages
First, it’s crucial to distinguish these sign systems from the primary sign languages used by Deaf communities, such as American Sign Language (ASL) or British Sign Language (BSL). Primary sign languages are the native, principal languages for their users. They are fully formed languages, developed organically within Deaf communities, and are not based on a surrounding spoken language.
The system used by the Arandic peoples falls into a different category: an alternate sign language. These are sign languages developed within a hearing community that already shares a spoken language, serving as a secondary, alternative channel of communication. While alternate sign languages are found in various cultures around the world (for example, among some monastic orders under vows of silence), those of Indigenous Australia are among the most complex and lexically rich ever documented.
So why develop a whole second language? The reasons are deeply cultural. Among Arandic groups such as the Kaytetye, and neighbouring peoples such as the Warlpiri, certain situations impose strict taboos on speech. The most significant is during periods of mourning. A widow, for example, may be forbidden from speaking for months, or even years. During this time, she is not rendered silent; she simply switches her primary mode of communication to sign. Similarly, speech is often avoided during hunting to maintain stealth, or during certain sacred ceremonies. In these contexts, sign is not an impoverished substitute for speech; it is a complete and effective replacement.
Splitting the Message: How Hands and Mouth Work Together (and Apart)
The true genius of the Arandic system, meticulously documented by linguist Adam Kendon, lies in how the two modalities—hands and mouth—interact. This relationship isn’t fixed; it’s a dynamic, flexible dance where the speaker-signer can allocate meaning in fascinating ways.
We can think of the interaction in three main ways:
- Redundancy: Saying the Same Thing Twice. In many cases, the sign and the speech convey the same information. A person might say “The kangaroo is over there” while simultaneously signing KANGAROO + THERE. This can be for emphasis, to ensure clarity in a noisy environment, or simply as a default mode of communication for fluent users.
- Complementarity: Sharing the Informational Load. This is where things get more efficient. The hands provide one piece of the puzzle, and the mouth provides another. A speaker might say, “He went,” while their hands sign the specifics: “QUICKLY, TOWARDS THE RIVER.” The full meaning, “He went quickly towards the river,” is only understood by processing both channels. The hands might specify the “how” or “where,” while the mouth handles the “who” and “what.” This is a highly economical way to pack a lot of information into a single utterance.
- Divergence: Speaking and Signing Different Sentences. This is the most mind-bending aspect of the system. A person can literally speak one sentence while signing a completely different one. Imagine a scenario where you are speaking to a group, but want to pass a private message to a friend across the camp. You could say aloud, “Yes, I will bring the water over soon,” for everyone to hear. Simultaneously, you could sign to your friend, “THAT MAN IS ANNOYING” or “LET’S LEAVE AFTER THIS.” This allows for a “sub-channel” of communication, a public track and a private track running in parallel. It’s the ultimate form of social multitasking.
What This Teaches Us About the Brain
The existence of these dual-modal languages fundamentally challenges the idea that language resides in a single, isolated part of the brain. While neuroscientists have long identified Broca’s and Wernicke’s areas as crucial for speech production and comprehension, studies of Deaf signers show that these same regions are recruited for sign language as well. Systems like the Arandic one push the point further: the brain’s language faculty is not tied to any single modality.
Instead, it suggests the brain is an inherently multimodal processor. It’s perfectly equipped to manage and integrate information from different streams—auditory, visual, and kinetic—at the same time. The Arandic speaker-signer isn’t rapidly switching between two tasks; they are performing one integrated, albeit complex, communicative act.
This cognitive flexibility isn’t entirely alien to us. We all use gesture, facial expression, and tone of voice to complement or alter the meaning of our spoken words. Pointing while saying “it’s over there” is a simple form of complementarity. Winking while delivering a sarcastic comment is a form of divergence, creating a second layer of meaning. The Arandic system is simply a far more grammaticalized and powerful version of this universal human tendency.
It demonstrates that the conceptual work of forming a thought (“I want to convey this idea”) happens at a deeper level, before it is encoded into a specific motor output, whether it’s moving the lips and vocal cords or moving the hands and arms.
Beyond a Single Channel
The alternate sign languages of Central Australia are more than a linguistic curiosity. They are a testament to human ingenuity and the incredible plasticity of our minds. They show us that language is not a monolith, confined to sound waves and processed only by our ears. It is a rich, multidimensional tapestry woven from sight, sound, and movement.
In a world increasingly dominated by text on screens, the Arandic system is a powerful reminder of the embodied nature of communication. It pushes us to appreciate the subtle dance between our own hands and mouths and to marvel at the potential that lies dormant in our brains—the potential to speak, in every sense of the word, on more than one channel at once.