Cross-Cultural Study on Recognition of Emoticon's Emotion shows that different cultures see emoticons differently

Emoticons are becoming more popular as a new channel for expressing feelings in online communication. Although familiarity with emoticons depends on culture, how exposure affects emotion recognition from emoticons remains an open question. To address this issue, we conducted a cross-cultural experimental study in Cameroon and Tanzania (among hunter-gatherers, swidden farmers, pastoralists, and city dwellers), where people rarely encounter emoticons, and in Japan, where emoticons are popular. Emotional emoticons (e.g., ☺) as well as pictures of real faces were presented on a tablet device. The stimuli expressed a sad, neutral, or happy feeling, and participants rated the emotion of each stimulus on a Sad–Happy scale. We found that the emotion ratings for real faces differed slightly but were broadly similar across the three cultural groups, supporting the "dialect" view of emotion recognition. In contrast, while Japanese participants were sensitive to the emotion of emoticons, Cameroonian and Tanzanian participants could hardly read emotion from emoticons. These results suggest that exposure to emoticons shapes sensitivity to their emotional meaning; that is, ☺ does not necessarily look like a smile to everyone.

Source: Is ☺ Smiling? Cross-Cultural Study on Recognition of Emoticon's Emotion – Journal of Cross-Cultural Psychology – Kohske Takahashi, Takanori Oishi, Masaki Shimada, 2017

39 episodes of ‘CSI’ used to build AI’s natural language model

A group of University of Edinburgh boffins have turned CSI: Crime Scene Investigation scripts into a natural language training dataset. Their aim is to improve how bots understand what's said to them – natural language understanding. Drawing on 39 episodes from the first five seasons of the series, Lea Frermann, Shay Cohen and Mirella Lapata have broken the scripts up as inputs to an LSTM (long short-term memory) model. The boffins used the show because of its worst flaw: a rigid adherence to formulaic scripts that makes it utterly predictable. Hence the name of their paper: "Whodunnit? Crime Drama as a Case for Natural Language Understanding". "Each episode poses the same basic question (i.e., who committed the crime) and naturally provides the answer when the perpetrator is revealed", the boffins write. In other words, identifying the perpetrator can be cast as a sequence labelling problem. What the researchers wanted was for their model to follow the kind of reasoning a viewer goes through in an episode: learn about the crime and the cast of characters, and start to guess who the perp is (and see whether the model can outperform the humans), as sketched below.
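
The paper's actual model is more involved, but a minimal sketch of that kind of incremental sequence labeller might look like the following. This is not the authors' code: the class name, dimensions, and toy episode data are all illustrative assumptions; the only idea taken from the article is an LSTM that reads an episode step by step and guesses at each step whether the perpetrator has appeared.

```python
# Illustrative sketch only: an LSTM reads an episode utterance by utterance
# and predicts, at each step, whether the perpetrator has been mentioned.
import torch
import torch.nn as nn

class PerpetratorTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # labels: not-perp / perp

    def forward(self, utterances):
        # utterances: (batch, seq_len, words_per_utterance) tensor of word ids.
        # Represent each utterance as the mean of its word embeddings.
        utt_vecs = self.embed(utterances).mean(dim=2)   # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(utt_vecs)                 # (batch, seq_len, hidden_dim)
        return self.classifier(hidden)                  # per-step label logits

# Toy usage: one "episode" of 5 utterances, 4 word ids each.
model = PerpetratorTagger(vocab_size=1000)
episode = torch.randint(0, 1000, (1, 5, 4))
labels = torch.tensor([[0, 0, 0, 1, 1]])                # perp revealed near the end
logits = model(episode)
loss = nn.CrossEntropyLoss()(logits.view(-1, 2), labels.view(-1))
loss.backward()
print(logits.argmax(dim=-1))  # the model's running guess at each step
```

Because the labels are read out at every step rather than only at the end, the model's evolving prediction can be compared with how a viewer's guess develops over the course of an episode, which is the behaviour the researchers set out to study.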

Source: 39 episodes of ‘CSI’ used to build AI’s natural language model • The Register