[Image: small Care-O-bot 4]

Gendering Social Robots: Voice



A male vs. female voice genders a machine (Nass et al., 1997; Siegel et al., 2009)
Childlike voices are often seen as gender-neutral (Obaid et al., 2016)
Ethnicity: African American, European American, Asian, etc.
Language and Accent (Andrist et al., 2015). English, for example,
     US—differs by region, slang, speech patterns, vocabulary
     UK—differs by class
     Australian
     South Asian—Hinglish
     Caribbean—not available in speech technology
     South African

Voices are full of cultural information. The choice of a robot voice is a major gender cue (Nass et al., 1997). The pitch of a voice signals whether the speaker is male, female, or a child. Lower voices carry more authority in Western culture. For example, Margaret Thatcher, the U.K.'s first woman prime minister, sought training from a vocal coach at the National Theatre to lower her voice and make it more authoritative (Atkinson, 1984).

Many voices in assistive technology for the elderly are childlike. Although we could find no analysis of this choice, a childlike voice is culturally non-threatening (Obaid et al., 2016).

Voices also encode cultural information via accents. For English, virtual voices currently include the following accents in adult male and female versions: US, UK, Australian, and South Asian (Hinglish), but not the many varieties of Caribbean or South African English (Knight, 2016). The US accent is typically not Southern or Texan.


This site is operated by the Institute for Gendered Innovations, Ochanomizu University.
