
Brain Wave

Psych majors dive into the mind-bending world of sensory substitution.

By Chris Lydgate ’90
December 2014

Cover of Reed magazine, December 2014. Photo by Clayton Cotterell.

Orestis Papaioannou ’15 takes a cotton swab, pokes it through a hole in a mesh-fabric cap and gently massages a dab of saline gel into the scalp of his experimental subject, an English major named Dan.

As he adjusts the electrodes in the cap, Orestis—who sports a neatly trimmed beard, flip-flops, and a t-shirt proclaiming “Life of the Mind”—runs through the experimental protocol. “We’re going to show you some shapes and play some sounds at the same time,” he explains. “You’re going to learn how to translate sounds back into shapes.”

Dan takes a sip of black-tea lemonade, settles himself in a chair, and dons a pair of headphones while Orestis and Chris Graulty ’15 huddle around a computer monitor that displays the subject’s brainwaves, which roll across the screen in a series of undulations, jumping in unison when the subject blinks.

The students are investigating the brain’s ability to take information from one perceptual realm, such as sound, and transfer it to another, such as vision. This phenomenon—known as sensory substitution—might seem like a mere scientific curiosity. But in fact, it holds enormous potential for helping people overcome sensory deficits and has profound implications for our ideas about what perception really is.


We’re sitting in Reed’s Sensation, Cognition, Attention, Language, and Perception Lab—known as the SCALP Lab—and this time I’m the subject. My mouth is dry and my palms sweaty. Orestis does his best to help me relax. “Don’t worry,” he smiles. “It’s a very simple task.” Orestis is going to show me some shapes on the computer screen and play some sounds through the headphones. For the first hour, each shape will be paired with a sound. Then he will play me some new sounds, and my job will be to guess which shapes they go with. I enter the soundproof booth, sit down at the keyboard, slip on the headphones, and click “start.”

The first shapes look like the letters of an alien alphabet. Here’s a zero squeezed between a pair of reverse guillemets. At the same time, panning left to right, I hear a peculiar sound, like R2-D2 being strangled by a length of barbed wire. Next comes an elongated U with isosceles arms: I hear a mourning dove flying across Tokyo on NyQuil. Now a triplet of triangles howl like a swarm of mutant crickets swimming underwater.

To call the sounds perplexing would be a monumental understatement. They are sonic gibberish, as incomprehensible as Beethoven played backwards. But after an hour of listening to the sounds and watching the shapes march past on the screen, something peculiar starts to happen. The sounds start to take on a sort of character. The caw of a demented parrot as a dump truck slams on the brakes? Strange as it seems, there’s something, well, squiggly about that sound. A marimba struck by a pair of beer bottles? I’m not sure why, but it’s squarish.

Now the experiment begins in earnest. I hear a sound; my job is to select which of five images fits it best. First comes a cuckoo clock tumbling downstairs—is that a square plus some squiggles or the symbol for mercury? The seconds tick away. I guess—wrong. On to the next sound: a buzz saw cutting through three sheets of galvanized tin. Was it the rocketship? Yes, weirdly, it was. And so it goes. After an hour, my forehead is slick with sweat and concentration. It feels like listening to my seven-year-old son read aloud—listening to him stumble over the same word again and again, except that now I’m the one who’s stumbling blindly through this Euclidean cacophony. And yet (swarm of crickets, three triangles) my guesses are slowly getting better. Something strange is happening to my brain. The sounds are making sense. 

I am learning to hear shapes.


Orestis Papaioannou ’15, Chris Graulty ’15, and Phoebe Bauer ’15 probe the brain’s ability to extract visual information from the auditory channel in Reed’s SCALP Lab. Photo by Clayton Cotterell.

How it Works: Shapes to Sounds



Subjects learned to hear the shapes after as little as two hours of training.

The system used at Reed relies on the Meijer algorithm, developed by Dutch engineer Peter Meijer in 1992 to translate images into sounds. The vertical dimension of the image is coded as frequency, between 500 Hz and 5,000 Hz, with higher spatial position corresponding to higher pitch. The horizontal dimension is coded as time, via a 500-ms left-to-right panning effect. The resulting sound—in theory—includes all the information contained in the image, but is meaningless to the untrained ear.
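To make the mapping concrete, here is a minimal Python sketch of a Meijer-style encoder (not the lab’s actual code): every lit pixel contributes a sine wave whose frequency is set by its row, and the columns are played in sequence while the signal pans from the left channel to the right. The geometric frequency spacing, the amplitude normalization, and the linear pan are illustrative assumptions, not details from the article.

import numpy as np

SAMPLE_RATE = 44100           # samples per second
SWEEP_MS = 500                # duration of one left-to-right scan
F_MIN, F_MAX = 500.0, 5000.0  # pitch range, bottom row to top row

def image_to_sound(image):
    """Turn a 2D array (row 0 = top, nonzero = lit pixel) into a
    stereo waveform of shape (n_samples, 2)."""
    rows, cols = image.shape
    # Higher spatial position means higher pitch: the top row gets
    # F_MAX, the bottom row F_MIN (geometric spacing assumed here).
    freqs = np.geomspace(F_MAX, F_MIN, rows)

    n_slice = int(SAMPLE_RATE * SWEEP_MS / 1000) // cols  # samples per column
    t = np.arange(n_slice) / SAMPLE_RATE
    out = np.zeros((n_slice * cols, 2))

    for c in range(cols):                    # scan columns left to right
        active = freqs[image[:, c] > 0]      # frequencies of this column's lit pixels
        if active.size:
            mono = np.sin(2 * np.pi * np.outer(active, t)).sum(axis=0)
            mono /= active.size              # keep the sum of sines in range
        else:
            mono = np.zeros(n_slice)         # silent column
        pan = c / max(cols - 1, 1)           # 0.0 = hard left, 1.0 = hard right
        seg = slice(c * n_slice, (c + 1) * n_slice)
        out[seg, 0] = (1.0 - pan) * mono     # left channel fades out...
        out[seg, 1] = pan * mono             # ...as the right channel fades in
    return out

# Example: a diagonal stroke from top-left to bottom-right should be
# heard as a falling pitch sweeping from the left ear to the right.
img = np.zeros((8, 8))
np.fill_diagonal(img, 1)
print(image_to_sound(img).shape)             # (22048, 2) at these settings

Chopping the sweep into per-column slices is the simplest scheme that matches the description above; a real encoder would crossfade the slices to avoid clicks at the column boundaries.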

AUDIO: The Reed team created 180 simple geometric shapes and transformed them into sounds, one sound per shape.

The interlocking problems of sensation and perception have fascinated philosophers for thousands of years. Democritus argued that there was only one sense—touch—and that all the others were modified forms of it (vision, for example, being caused by the physical impact of invisible particles upon the eye). Plato contended that there were many senses, including sight, smell, heat, cold, pleasure, pain, and fear. Aristotle, in De Anima, argued that there were exactly five senses—a doctrine that has dominated Western thought on the subject ever since. In Chaucer’s Canterbury Tales, for example, the Parson speaks of the “fyve wittes” of “sighte, herynge, smellynge, savorynge, and touchynge.” Even today, a cursory internet search yields scores of websites designed to teach children about the five senses.

The five-sense theory got a boost from psychological research that mapped the senses to specific regions of the brain. We now know, for example, that visual processing takes place primarily in the occipital lobe, with particular sub-regions responsible for detecting motion, color, and shape. There’s even an area that specializes in recognizing faces—if that part of the brain is injured by a bullet, for example, the subject will lose the ability to recognize a face, even his own.

But the Aristotelian notion that the senses are distinct and independent, like TV channels, each with its own “audience” sitting on a couch somewhere in the brain, is deeply flawed, according to Professor Enriqueta Canseco-Gonzalez [psychology 1992–].

For starters, it fails to account for the fact that our sense of taste is largely dependent on our sense of smell (try eating a cantaloupe with a stuffy nose). More important, it doesn’t explain why people with sensory deficits are often able to compensate for their loss through sensory substitution—recruiting one sensory system as a stand-in for another. The Braille system, for example, relies on a person’s ability to use their fingers to “read” a series of dots embossed on a page.

In addition, psychologists and neuroscientists have identified several senses that Aristotle didn’t count, such as the sense of balance (the vestibular system) and the sense of proprioception, which tells you where your arms and legs are without you having to look.

In truth, says Prof. Canseco-Gonzalez, our senses are more like shortwave radio stations whose signals drift into each other, sometimes amplifying, sometimes interfering. The slippage is metaphorical as well as neural: it shows up in expressions that borrow words from one modality to describe phenomena in another. We sing the blues, level icy stares, make salty comments, do fuzzy math, wear hot pink, and complain that a movie left a bad taste in the mouth. It also crops up in the intriguing neurological condition known as synesthesia, in which certain stimuli provoke sensations in more than one sensory channel. (Vladimir Nabokov, for example, associated each letter of the alphabet with a different color.)