For quite a while, many years I would say, I have assumed speech to be a level playing field for HCI (human–computer interface) design, and much of the work Arun and I have done, even the “name” we used for it (Radiophony), implied “speech” as a deliverable.
Two interesting snippets make me feel a rethink is worthwhile. As primates, our collection of species has evolved in an exceedingly communication-centric way, and no species on earth has developed communication as complexly and richly as humans.
The first is the well-known fact that American Sign Language (ASL), or variants of it, has been used to communicate with chimpanzees and other apes for years; some of these animals displayed a stunning ability to form complex statements using this tool.
Signing is interesting because an entire ethos of inclusion for those with hearing challenges has developed around it, in contrast with an opposing school of thought that seeks to drive inclusivity by teaching persons with severe hearing challenges to overcome those challenges: using non-verbal communication on the input side, and developing speech-appropriate muscle control (sans the actual speech feedback) on the output side. For obvious reasons, speaking children find it much easier to be ‘accepted’ in mainstream schools. In India, we have a fairly robust counterpart, Indian Sign Language (ISL), suited to sub-continental languages.
Still, that set of experiments appeared to be aimed more at establishing the intelligence of apes than at communication itself, although this may be a spin put on by the popular media.
The second, however, is more interesting. A family of musicians, faced with a child born nearly totally deaf (ie, deaf prior to language development), adopted signing within the family as a normal process, extending it to the parents’ siblings and their children too. The children, including a tot in his first year and an older child with cerebral palsy, soon showed that they were capable of complex, advanced ‘speech’, ie signing, long before their vocal cord and muscle development was mature enough to form words verbally. This can be seen on YouTube – search for Two Little Hands. The parents now market the TLH technique of early learning as an ongoing business.
Evidently, the nascent ability to communicate may be nurtured by the availability of appropriate tools of communication, and arguably, ‘learning issues’ are compounded by the slow maturation of the human muscles and nerves necessary for speech. Of course, for evolutionary reasons, speech is the ‘conventionally’ favoured – indeed, the primarily favoured – means.
It seems to me that human communication can be effectively delivered using tools other than the ones we find ‘normal’, such as speech and hearing. Arguably, just as we find it acceptable to use multiple senses (sight and touch) to supplement hearing and make communication more meaningful, easier and more effective, perhaps we should consider developing a tool based on, for instance (but not necessarily limited to), signing, as a more inclusive approach to HCI than we have had in the past. Current tools, ie interfaces, are almost completely based on mimicking traditional paradigms, primarily textual and verbal communication. At some point, such development becomes a blind alley, simply because it runs into a wall when persons with severe sensory challenges need smart assistive tools.
There are hints of more inclusive communication methodologies in popular literature and entertainment – mostly SciFi, naturally. For instance, in the film “Close Encounters of the Third Kind”, the communication between humans and aliens takes place using a combination of arrays of lights and multiple monotonal (ie “pure”) aural tones. Quite obviously, just as in human-human communication, multi-channel communication affords much greater and faster information transfer. Moving away from the traditional in favour of the improved could also be a pathway towards abandoning the interfaces that persons with highly heightened senses, particularly those on the autism spectrum, find obnoxious.
Of course, the recognition of the limitations of conventional wisdom, insofar as learning issues are concerned, is not new. In fact, the term “conventional wisdom” itself has a faintly pejorative tinge for precisely this reason: generally speaking, humans are smart enough to notice when hallowed traditions prove hollow. In the case of dyslexia – the inability to reliably identify text – verbal and audiovisual workarounds have long been accepted as fairly effective bridges to inclusion. The same applies to other common forms of sense-related communication dysfunctions, such as alexia, dyscalculia and dysgraphia, which cause enormous damage to the learning process when inadequately accounted for during formal schooling.
The noted communicators Temple Grandin and Amanda Baggs, both of whom are identified as severely autism spectrum affected, show that they are not only capable of complex communication, but that they enjoy a quality of communication arguably far richer than that of those less sensorily endowed.
As human beings naturally limited by our current comprehension (and therefore acceptance) of the reach of our sensory organs, we find nothing puzzling or incredible about using technological enhancements. There is no rational reason why we should not view our acceptance of such sense-enhancing tools as a stepping stone to developing communication itself, right from a very early age.