Natasha Mitchell: All in the Mind on ABC Radio National, hello from me Natasha Mitchell -- welcome. Now radio of course relies on sound but language as you'll hear doesn't. Not in the brain nor in the world.

Overseas recently I was waiting at a quiet country train station and on my right I had a group of western tourists having a very animated conversation and very loud. On my left though a couple were also having an animated conversation. The difference? They were totally silent. So today, the art and staggering science of sign language, with a stand-up comic, two scientists and a film-maker all living and working in a brilliantly bilingual world.

Hilari Scarl: Hello, I'm Hilari Scarl, I'm a documentary film maker from Los Angeles, California.

CJ Jones: Hello, I'm CJ Jones, actor. From Los Angeles, Hollywood!

Hilari Scarl: Now yesterday they taught us "whinging", so I had never heard that before. It said he was "whinging"; I was like, whinging...whinging....?

Natasha Mitchell: So what's the sign, you've got your hand...

Hilari Scarl: It's the same sign for complain, where you're sort of thumping your chest.

CJ Jones: As if you're coughing.

Hilari Scarl: As if you're coughing, right, with an open palm.

Natasha Mitchell: Yes, that's a distinctly Australian word.

Karen Emmorey: Well one of the things that sign languages tell us is about the drive for human language. So wherever deaf people come together, a language emerges. There's a lot of exciting work now looking at the birth of new languages that are looking at sign languages. We've been looking at sign languages to try to understand what's universal about all human languages, and to understand what are the determinants of brain organisation for language.

The more I learned about sign language, the more I realised that it really was a powerful tool to ask questions about the nature of human language and about the brain. And, it was also just really fun (laughs). So I mean I started taking sign language classes and we've always had deaf researchers and deaf scientists in the labs that I've worked in. To do sign language research you have to include deaf people in your research as scientists or as deaf research assistants, because otherwise you don't know what you're getting. You need someone who's expert in the language and in the community to help you do the research.

Natasha Mitchell: Linguist and neuroscientist Professor Karen Emmorey. More from her in a moment, including on the tip-of-the-tongue equivalent in sign language: it's that tip-of-the-finger feeling.

Performer CJ Jones lost most of his hearing after spinal meningitis at age seven. Both his parents were deaf, his six siblings hearing, which meant that sign language was his first language. But this was pre-civil rights St Louis in the USA and being black and deaf was a double whammy.

Unlike his parents, though, he made it to college, became a performer, runs a 24-hour internet TV channel showcasing deaf artists called SignWorldTV, and has run the first international Sign Language Theatre Festival.

From the film See What I'm Saying: Ladies and gentlemen, please welcome CJ Jones... (music and crowd applause)

CJ Jones: I'm very proud to be a deaf person. I'm very proud of my language. So when people say 'Oh, I'm sorry you're deaf,' I say 'there's nothing wrong with being deaf. I have good education, I'm fine. I drive a car, I marry, I produce children... hello!?'

Natasha Mitchell: Absolutely, I gather your dad used to tell you though that "you're better than the hearing". Was that helpful as a child to have your father say that?

CJ Jones: Yeah, because my dad was always warning me that "don't let hearing people put you down, don't let hearing people think you're stupid. No, you are above that, you're smart; you're capable of anything you want in life". So I stick it in my mind for all my life. But at the same time I always believed that I should be able to trust hearing people, work with them, whereas my dad is against hearing people, he distrusts them, hearing people because...

Natasha Mitchell: He was angry?

CJ Jones: Oh, he's very angry because they push him down and they put him to work as a janitor. He wanted to go to college and they said "no, you can't go to college because it's all for white people".

Natasha Mitchell: CJ's dad channelled some of this resentment into becoming a Golden Gloves champion boxer. CJ stars in a new doco film called See What I'm Saying, which has just screened at The Other Film Festival in Australia which showcases cinema by, with and about people with a disability. It follows the lives of four deaf entertainers and is directed by Hilari Scarl, who's doing some of the sign language translation for our interview today in American Sign Language, ASL, totally different to AUSLAN in Australia by the way.

As an actor Hilari performed in dozens of theatre productions before she discovered the National Theatre of the Deaf 18 years ago, and everything changed.

Hilari Scarl: Yeah, sign language really changed me as a person too. There are some things that are very particular to deaf culture, like it's very typical to be blunt. It's important because communication is a highly valued thing in the deaf community, because so much is lost, and a lot of hearing people are deciding what's important, like 'you don't need to know that', or 'what's so funny?... oh never mind'. So there's so much information that hearing people try to control. In deaf culture it's not considered rude to say, 'Oh, how much did your house cost? It's really nice.' Or, 'You look like you gained weight -- are you getting fat?' And a lot of hearing people, when they're learning sign language, spend the first few months crying a lot, me included! (laughs). And then you realise no, it's information, it's not necessarily anything personal.

Karen Emmorey: Well, it can get really as complex as any spoken language. So any complex lecture on philosophy of mind or particle physics can be expressed in sign language, just as it can be expressed in spoken language.

Natasha Mitchell: I'm in the laboratory for language and cognitive neuroscience now at San Diego State University, catching up with Director Karen Emmorey, a professor in the School of Speech Language and Hearing Sciences.

Karen Emmorey: But I think what's a little more fun to think about is language as art. For sign language there is beautiful signed poetry, and it uses different techniques. So it uses visual techniques, rhyming that's based on hand shapes -- rhyming is probably not the right word. But I think that's a good example of the subtleties: when you look at sign language as art, not only can it express scientific information and it's got complex structures, you can have accents in sign language. And you have to pay attention to the face. There are subtle facial expressions that make a big difference in the syntax, and if you miss those you don't understand the sentence.

But in addition, the subtleties of meaning are conveyed in metaphor, visual based poetry, they can use cinematic techniques of zooming in and zooming out, I mean it's just wonderful.

Natasha Mitchell: That's wild, I couldn't have imagined how poetry would work in sign language, but it's a whole gestural, body movement thing. Are some people better at it than others... in a sense it takes a degree of body awareness and co-ordination that perhaps not all signers are going to have?

Karen Emmorey: Well exactly, it takes an artist. So in fact there are people in the deaf community known as 'master story tellers' and that is what their art is and their skill is, creating and telling narratives, both true stories but also invented stories. And there's now a number of DVDs of ASL literature, and that shows you how far we've gone also. Now in many universities you actually can take a course on ASL literature, and literature analysis in ASL.

Hilari Scarl: That was really how I got involved in the first place. I was an actor in New York and I had grown up seeing every kind of theatre possible, Shakespeare, Moliere, experimental theatre, Kabuki. And then the first time I was ever exposed to deaf theatre it blew my mind. I saw everything that was happening. They had a voicing actor speaking the lines, similar to the idea of animation where it separates the picture from the voice, and with the two working together I could watch the physical performance of the deaf actor. And so when they're describing the ripped wallpaper on the wall, the deaf actor is signing that and I know exactly what that wallpaper looks like and how big that rip was, you know, and her face expressed exactly how she felt about that rip. So much more than just a few words could express, and I instantly fell in love with the language and the poetry of that, and then I started learning sign language and toured myself, then, with the National Theatre of the Deaf.

Natasha Mitchell: Your shows are incredibly physical; I mean how important is that to you as a comic, the physicality of the show? ...How do you use your body?

CJ Jones: Remember sign language comes with facial expression, body language, movement -- it's all incorporated together. So I developed my physical talent from there. For deaf people it comes naturally because we use sign language to communicate.

Natasha Mitchell: You have a solo show you do called What Are You - Deaf? There's a scene in it where you are yelled at by a car driver, another car driver and -- describe that scene, play that scene out for us.

(See the video of CJ's performance of this skit here and closer-up here.)

CJ Jones: OK, here I go.

I was driving down on the freeway it was a beautiful day

Oh, all the birds were flying (Ew!)

And all the birds were singing (Tweet, tweet!)

And all the birds dropping (sound of bird poo!)

Hey you! (Squawk Squawk)

And the bird goes, ha ha ha!

(...tweet, tweet. twe...t)

And hits the wall. Ha I got you!

And I kept going.

Turned up the radio

And I look in the rear view mirror and the guy behind me was going (honk honk) and angry, "Hey you, what are you, deaf huh?"

"What are you calling me that for?" So I step on the gas. (Oh, by the way I have a Mercedes 500 SL, thank you very much)

I caught up with the car (vroooom)... automatic window (zzzzt).

'Hey you, what are you -- hearing?' Ha, ha, ha!

(Vroooom) Thank you very much.

Natasha Mitchell: I love it! (laughs) Do you perform the show differently when you have a predominantly hearing audience, compared to a predominantly signing audience?

CJ Jones: If I'm performing for a hearing audience then I have to think in English and signing is really tough.

Hilari Scarl: Signing and speaking simultaneously...

CJ Jones: It's not that easy. But I'm able to, the more I go with the flow I will be really able to deliver my jokes and the audience reaction is really good. If I'm mispronouncing anything and they miss it, that's difficult.

With a deaf audience I don't have to speak at all, I just sign, and it's my own first language. It's a huge difference. And sometimes I have an interpreter and she will interpret what I say. Often I will prefer to sign and speak because the punch line, the timing is so important, and if the interpreter doesn't know my jokes and she is behind it is not that great.

But I do have sound effects in my show a lot. [demonstrates...car screech, bird squawk etc]. I do a lot of that, and it really helps my expression, my voice, my signing...all in one...it's like a cartoon.

Natasha Mitchell: And we love that on radio.

CJ Jones: Oh cool.

Natasha Mitchell: CJ Jones and Hilari Scarl are my guests today on ABC Radio National's All in the Mind with me Natasha Mitchell. So sign languages are a rich realm for them as artists, as they are for scientists. Karen Emmorey and colleagues are studying them to investigate what language actually is in our brains, with really thought-provoking results.

Karen Emmorey: Because once they're recognised as natural human languages, then a world of exploration opens up. So you can then ask 'OK, you have this language where you use your hands instead of your tongue, it's perceived visually instead of auditorily -- does that change how the brain is organised for language?' You can look to see how knowing a sign language affects cognition. Is language the way it is because it is spoken or because it's signed? Is the brain the way it is because it evolved for spoken language, or do we see similar regions involved in processing sign language?

Natasha Mitchell: That's great isn't it... Steven Pinker describes language as 'the stuff of thought,' it's so entwined with how we think. Let's come then to how we understand spoken language in the brain and what you've been revealing in your brain scan studies about how sign language operates differently, but also similarly, to spoken language. Because this is quite intriguing... sign language to the non-signer like myself seems to be a very visual language and yet it's engaging with spoken areas in the brain, isn't it? Tell us that story.

Karen Emmorey: Well there are two major classic language regions in the brain that have been known for 100 years. One is Broca's area, which is classically identified as a speech production area -- involved in speech production. And the other is Wernicke's area, classically involved in speech comprehension. Now Broca's area is just in front of the region that controls the lips and the tongue. So you might think, well, it makes sense for spoken language for Broca's area to be right next to the area that controls the lips and the tongue. But of course signers use the arm and the hand, so one question is: is Broca's area involved in sign language production, since it's not near the areas that are controlling the linguistic articulators?

So we did a study where we asked signers to produce single signs while undergoing Positron Emission Tomography or PET scanning, where we can look at what areas of the brain are active when they are producing a sign, they just named pictures. And we compared that to a group of studies where English speakers were doing the same thing, naming pictures, but they were producing spoken English words. And we wanted to find out what areas of the brain were equally active for both sign and word production. And lo and behold, Broca's area popped out as equally active for both languages.

So what that tells us is the function of Broca's area isn't tied to the speech articulators and, despite the fact that there are really strong connections between Broca's area and auditory regions, that region still plays a critical role in the production of a visual and manual language.

And you find a similar story for Wernicke's area, the region that's involved in speech comprehension. That region is right next to auditory cortex. So you might think, well it takes on that function because of its proximity to the input system for spoken language -- the auditory system.

Natasha Mitchell: Sound, in other words, so speech.... that's really defined by speech being the vehicle through which we converse.

Karen Emmorey: Right, so we understand speech auditorily, generally. So the question is, of course the visual system, the visual cortex, is very far from auditory cortex, it's in the back of the brain, and sign languages are perceived visually, so do you see Wernicke's area active when signers are viewing visual language, sign language? And the answer across many studies is 'yes', you see activation in Wernicke's area when signers are processing a visual sign language. So again it's a language area.

Natasha Mitchell: Which must have surprised people, because we were so sort of attached to this relationship between sound and language.

Karen Emmorey: That's right. And also we looked at the actual anatomy of the auditory cortex in deaf signers, these were all congenitally deaf signers so they were born deaf. Do we see that the auditory cortex atrophies or shrinks, do we see a reduction in the volume of grey matter, of brain cells for deaf people compared to hearing people? And the answer is no, the amount of brain cells, that is the grey matter, was equal for deaf signers and hearing people.

It's telling us that those regions aren't necessarily tied just to auditory input so that they can respond to visual input.

Natasha Mitchell: So they're really primarily involved in this beast that we call language, and its primacy in our brain?

Karen Emmorey: Well that's right, I mean I think that's the exciting thing about sign language, is that it's telling us what's really important to the human brain and I think to humans in general, is language, not speech.

Natasha Mitchell: How did you come to be working as both a scientist who is deaf but also particularly focused on sign language?

Stephen McCullough [voice of Brenda Nicodemus]: Oh, I started getting interested through my interest in art and somehow when I was studying at UCLA I started to switch over to the scientific enquiry, and I got interested in the brain and how it functions.

Natasha Mitchell: That's the voice of American Sign Language researcher and interpreter Dr Brenda Nicodemus, and across from me is neuroscientist Dr Stephen McCullough, using sign language. He's a long time collaborator of Karen Emmorey's at San Diego State University.

Stephen McCullough: Often we take for granted what we see around us, how the brain processes how we speak, how we communicate, how we process information and express ourselves. It's amazing really, and once I got started I just delved into it fully. Sign language is wonderful to study and I feel I can contribute my experience personally. What people tend to take for granted, hearing and deaf people - it's different - but we can work collaboratively as a team and challenge the assumptions that we have in our work and our experiences. And often we might make wrong assumptions, and we need to share other perspectives. So this is a scientific method and I enjoy that and have throughout my career.

Natasha Mitchell: The relationship between emotion, sign language and speech in the brain and the world is a key research interest of Stephen McCullough's.

Stephen McCullough: Eyebrows will go up or down, if people don't know sign language they may think people are showing surprise or emotion. Like I'll say 'Do you want to eat?' and my eyebrows go up, my eyes widen and people may take it for an emotional expression. You know, people don't necessarily separate grammatical versus emotional facial expressions if they're not familiar with it.

Natasha Mitchell: But your work is really revealing that the brain of native signers sees the two very differently.

Stephen McCullough: Exactly. So when you see another person expressing emotion, often it is activated in the right hemisphere. And also linguistic processing seems to be in the left hemisphere, as we know. So the interesting conundrum is for deaf people, people who observe facial expressions that have linguistic meaning, grammatical meaning. And let me give you an example if I can of adverbials, and adverbial facial expressions. For example I'm going to use the sign now for writing... 'to write'.

Natasha Mitchell: So you've actually got your hand out in front of you and you are in effect holding a pen and writing on your hand.

Stephen McCullough: Right, so my facial expression is like this, so my tongue is slightly protruding between my lips, and now I'm pursing my lips and using the same verb... 'to write'. So the verb remains static, intact, but what's changing is the facial expression, and they have completely different meanings. One meaning is that it's an everyday activity, the other one has a different meaning. So this is a simple example of how facial expressions are used in American Sign Language grammar. And so the question that I wanted to raise was what happens within the brain structures, how do people process this type of information? And so we did a neuroimaging study using fMRI to observe how people, when viewing this sentence, would react, what would be evoked in their brain.

Generally hearing people when they're watching emotional facial expressions do have right hemispheric stimulation, as we would anticipate. Deaf people however have activity on both hemispheres, and when deaf people are looking at facial expressions that are coupled with a sign they have strong activity in their left hemisphere. Whereas hearing people who are unfamiliar with sign language - sign naïve people - maintain right hemispheric stimulation.

Natasha Mitchell: So they, in a way, people who don't sign, aren't seeing the linguistic meaning in facial expressions, but deaf people are seeing both.

Stephen McCullough: Exactly. For hearing people who see linguistic facial expressions, really they don't have anything semantically to tie it to, it's just meaningless. It might just be tied to some imagined emotion or whatever. So they don't have that kind of processing in the left hemisphere.

Natasha Mitchell: Which indeed is the hemisphere so central to language.

Stephen McCullough: Exactly. I also want to talk about perception -- there's another study that we did with deaf people who have had a stroke, and what we found is that if the damage is in the left hemisphere they aren't able to express linguistic facial expressions, but their normal expressive emotional facial expressions are intact. So again it's showing the distinction between these two types of processing in the right and left hemisphere.

Natasha Mitchell: And this is crucial new information in the rehabilitation of deaf people after a stroke. Dr Stephen McCullough translated there by Dr Brenda Nicodemus. And catch the video of our interview on the All in the Mind YouTube channel and the blog.

He and Dr Karen Emmorey are also studying the brains of hearing people who both speak and sign fluently. And a bilingual brain isn't the same as two monolinguals in the one body -- and they're smart too. The details of that can be heard on the All in the Mind blog as extra audio this week.

CJ Jones is certainly witnessing bilingualism in his son, who became a natural signer as a baby before he could speak.

CJ Jones: Sign language is the number one. And then later they could learn to speak, they learn to write and be able to express better. For example I have a hearing son. He's three years old.

Natasha Mitchell: You have a hearing son. Age three..

CJ Jones: Ah ha, three years old. It's fascinating to watch him and he started learning sign language, that's his first language. I taught him and he communicated more than 800 signs by the time he was one year old. He knows a lot.

Natasha Mitchell: So sign language was his first language?

CJ Jones: Yes, it's amazing. His first language. Before he speaks. And by the time he is three years old he's talkative (sound effects of his son) and he signs a lot. My wife is hearing; he would talk to her and would sign and talk to me (laughs).

Natasha Mitchell: So he is truly... at three he is truly bilingual?

CJ Jones: Bilingual. Amazing.

Natasha Mitchell: It's quite brilliant isn't it? Because as a child you're a natural signer; that's the skill you have, you don't have that verbal ability yet.

CJ Jones: Because verbally it takes time, but with sign it's all about pictures and you can express freely whatever you want. You can have your own sign language too!

Natasha Mitchell: Certainly it's only very recently really, just the last 10 or 20 years, 20 years maybe, that linguists took sign languages seriously as a pursuit to study and investigate -- it's recent isn't it?

Karen Emmorey: Relatively recent. So the first linguist to really start documenting sign languages as a language was Bill Stokoe in 1960. And since then there's been a revolution not only within linguistics -- so I think linguists were the first to realise 'oh, when we look at these languages we find the same kind of structures that we see in spoken language'. So even down to phonology. So for spoken languages you combine consonants and vowels to create new words -- well what about signs? They look just like holistic gestures, in fact there are some signs that look just like gestures.

But in fact when you look at linguistic structure you see that 'no', there is a level that's just form based, so you combine hand shapes, locations, movements to create signs. And some of the work we've done is to use some psycholinguistic evidence to look at that division between 'form' and 'meaning'. So again coming back to the signs that are really iconic - the sign for ball looks like you're holding a ball, the sign for hammer looks like you're actually hammering. So maybe in sign languages you just conflate meaning and form, they are not represented separately?

Well, if that's the case then signers shouldn't experience the equivalent of having something on the 'tip of your tongue'. When something is on the tip of your tongue you know the meaning, you've accessed the semantics, you know who the person is you're looking for the name of, or what the thing is you want to name, but you can't get the form. For sign, if meaning and form were the same, once you got the meaning you would get the form. So we looked to see: do signers experience something called 'tip of the finger'? Do they ever have this sense of 'I know the sign I want but I can't get the form, I don't know the hand shape, I can't pull it out'? And you know we did a diary study where signers kept track of that, did they ever have that feeling... and yes, they did.

Natasha Mitchell: So they know... it's in the tip of their fingers literally. But it's the same thing going on in their head, I guess, they just can't express, or they've got a sort of inkling but they can't quite remember it.

Karen Emmorey: Right so one thing we found was that the rate of TOFs and TOTs was about the same.

Natasha Mitchell: 'Tip of the finger' and 'tip of the tongue'.

Karen Emmorey: Right. And also if you've ever had this experience yourself sometimes you know...

Natasha Mitchell: All the time I have 'tip of the tongue'. I thought it was just me and lapsing memory.

Karen Emmorey: Well sometimes when you're in that state you know what it starts with, you'll often say 'I know it starts with a B... or'. And so we wanted to find out: do signers have the same kind of thing, where they've gotten part of it? And so we elicited these tip of the finger experiences by presenting proper names that they had to then produce the sign for, because proper names are often when you have one of these tip of the tongue or tip of the finger experiences. And then when they were in that state they'd say 'OK, I know what the sign for Scotland is but I can't...'. And we'd say 'OK, well do you know the hand shape, do you know the movement?' And sure enough... so, the sign for Scotland looks like you trace a plaid pattern on your shoulder.

Natasha Mitchell: Oh right.

Karen Emmorey: And what the signer said was they knew that sort of four hand shape, the four fingers extended, they knew the movement, but they weren't able to retrieve the location. And so it suggests that for both speakers and signers you're able to get the beginnings of words or signs.

Natasha Mitchell: How do you see this work translating into the deaf community?

Karen Emmorey: It's important for the deaf community in a couple of ways. One way is that it validates American Sign Language as a natural human language with the same complexities as a spoken language. And this information can then be used as ammunition for supporting the teaching of American Sign Language as a foreign language, using American Sign Language in the education of deaf children. So it provides the deaf community with arguments for why sign language should be accepted in these different venues.

It also shows -- I think the brain studies are particularly compelling that sign language is just like spoken language, not only just when you look at linguistic studies but the brain thinks it's the same as spoken language.

Song from the soundtrack to 'See What I'm Saying'. View the soundtrack with subtitles here.

Natasha Mitchell: Professor Karen Emmorey from San Diego State University. And there are pictures of my conversation with CJ Jones and Hilari Scarl here on the All in the Mind website: abc.net.au/rn/allinthemind. Your discussion welcome here too -- which went off after our climate change and the psyche show last week.

A full archive of transcripts there as well -- tell your friends in the deaf community about that, a great resource. And over on my blog lots of extra audio with guests this week including their view on cochlear implants.

Thanks today to Miyuki Jokiranta and Corinne Podger. I'm Natasha Mitchell. Catch you next week.