Transferring emotions over digital medium

Haven’t you noticed that transferring emotions over a mobile phone can be pretty difficult? How about explaining to a colleague via email that his work from the past week really isn’t at all what you were looking for? As you might already know, most of the message we want to convey is not contained in our words alone, but in how we deliver them: intonation, facial expressions, body posture and so on (see Fig 1).

Fig 1: The division of how we communicate: what we say (yellow), how we say it (red), and what we do while we say it, i.e. body language (blue). (c) http://www.dragonbridgecorp.com

But how can we transfer that richness of information to the person we are contacting, and how can they interpret it the same way they would in a real-life interaction? Students at Changhua University in Taiwan conducted a study on understanding 3D facial expressions with children who have IDD (Intellectual and Developmental Disabilities). A small-scale but intensive experiment was done with three children, aged 11, 8 and 8. These children live with a major interaction deficit, losing over half of the information that others do get. And that is exactly what is going on with digital communication nowadays.

Individuals with IDD have difficulty recognizing people’s facial emotional expressions, which prevents them from having normal social communication with other people. This project aims to evaluate the learning effect of using a 3D-emotion system to help children with IDD learn social emotions (see Fig 2). A computer program that shows avatars with strong facial emotional expressions in various situations guides the children through the experiment.

Fig 2: A training session on the computer, showing an emotion and multiple options

The experiment happened in three steps. First, a baseline was determined that would later be used to detect improvements. Next, the children received extensive daily training with the 3D-emotion system for a week. The final step was the follow-up, in which the children were evaluated by parents, teachers and family. The results of this experiment are shown in Table 1.

Candidates   Baseline   Training   Follow-up
John         30.3%      83.3%      89%
Tiffany      24.5%      41.6%      50%
Jack         16%        61.6%      62.5%

Table 1: The average percentage of correctly recognized emotions for the three test subjects

If we compare the baseline of each participant with the follow-up value, we can conclude that correct emotion recognition improved by a factor of roughly two to four. If the baseline condition is taken to represent our current experience with digital communication, and the follow-up an ideal environment in which we could convey all of our non-verbal communication, the increase in information flow could likewise be two to four times as high.
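The improvement factors can be checked directly against the numbers in Table 1; a minimal sketch:

```python
# Baseline and follow-up recognition rates from Table 1 (in percent)
results = {
    "John": (30.3, 89.0),
    "Tiffany": (24.5, 50.0),
    "Jack": (16.0, 62.5),
}

for name, (baseline, follow_up) in results.items():
    factor = follow_up / baseline
    print(f"{name}: {factor:.1f}x improvement")
# John: 2.9x, Tiffany: 2.0x, Jack: 3.9x
```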

At this very moment people are waving at their screens, playing Kinectimals on the Xbox 360 (see Fig 3). As far as the console is concerned, there is a difference between caressing your pet and slapping your pet, and that difference is based purely on the speed of your hand. In a way you could say that the console is aware of very elementary emotions: you could call the user ‘caring’ when he is gentle with his pet, or ‘frustrated’ otherwise. Hence the computer has a very basic idea of how the user is feeling.
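Such a speed-based distinction could be sketched in a few lines. This is a hypothetical illustration, not Kinect’s actual logic; the threshold value is an assumption:

```python
def classify_gesture(hand_speed: float) -> str:
    """Classify a hand gesture as a caress or a slap by speed alone.

    hand_speed is in metres per second; the cutoff is an assumed value,
    not taken from any real console implementation.
    """
    SLAP_THRESHOLD = 1.0  # assumed cutoff in m/s
    return "slap" if hand_speed > SLAP_THRESHOLD else "caress"

print(classify_gesture(0.3))  # a slow, gentle movement -> caress
print(classify_gesture(2.5))  # a fast, forceful movement -> slap
```

From the gesture label, the game can then infer the elementary emotion: ‘caring’ for a caress, ‘frustrated’ for a slap.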

Fig 3: Kinectimals is a game that is played on the Xbox 360 with the Kinect camera

If we transport such a feature to a shooter game, the scenario changes. In a way, an aggressive user is the target audience for such games. Most of them are available in multiplayer versions, bringing players from all over the world into your living room. What if strong, big movements, the signs of an angry person, were reflected in the game? The user could become stronger than the others, or get a different appearance. Try doing that with a mouse and the arrow keys on your keyboard!
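Mapping movement onto in-game strength could look like the sketch below. All names and numbers (the gain and the cap) are invented for illustration:

```python
def strength_multiplier(movement_energy: float) -> float:
    """Turn detected movement energy into an in-game strength bonus.

    movement_energy is an abstract, non-negative measure of how big and
    fast the player's gestures are. The scaling and the cap are assumed
    values, chosen so the bonus stays bounded.
    """
    BASE = 1.0   # normal strength with no extra movement
    SCALE = 0.5  # assumed gain per unit of movement energy
    CAP = 2.0    # never more than double strength
    return min(BASE + SCALE * movement_energy, CAP)

print(strength_multiplier(0.0))  # calm player: normal strength
print(strength_multiplier(1.0))  # agitated player: boosted
```

Capping the bonus is a design choice: without it, the angriest player would always dominate the match.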

Luckily people quickly get exhausted by the extensive physical exercise, which makes 3D gaming the best way of working off all of those frustrations!

In conclusion, a 3D interface can help people experience an interaction similar to real life without bothering others or embarrassing themselves; it actually feels more natural. On top of that, 3D interaction makes the gaming experience even more realistic, while the game itself can be completely surreal. Being king of the world was never so much within reach. As a consequence, 3D interaction could indeed improve our social life.

Reference: Yufang Cheng and Shuhui Chen, http://www.sciencedirect.com/science/article/pii/S0891422210001496


4 Responses to Transferring emotions over digital medium

  1. I think games like these do not make a person more social. Previously there were only board games, and you had to sit together for the game to work. With new technologies, people find it more spectacular and play the game by themselves. With computer games, some people become less social, and I think this will have the same effect.
    Or is the goal that this will be extended to real life, so that people who cannot speak can communicate based on their gestures?

    • At this moment, we live in a real computer era. There are probably millions of people playing games at this very moment, meaning that the context has changed. Comparing this to the good old times when the family happily sat around the table isn’t really a fair comparison.

      We will discuss the social interaction of computer games in general in our next blog post.

  2. bartminne says:

    If you were to implement this technology to, for example, help maintain long-distance relationships, facial expression recognition would indeed be needed. But since the technology is going to be used in a gaming environment, which is very controlled, I don’t think it is necessary. There is only a limited number of actions possible in games, which can easily be controlled by hand movements without the need for facial expression recognition.

    Secondly, if you were to use this technology for long-distance relationships, I don’t think it would be a success, since it would be nicer to look at the real camera images. In that case there is no issue with wrongly interpreted facial expressions.

    • Please keep the context as we framed it: playing games in a controlled environment.

      Facial expressions will be recognised by future 3D cameras (within a year or two you may have one in your living room [http://gizmodo.com/5862968/kinect-2-so-accurate-it-can-lip-read]), but being able to detect them is only one aspect. The other, and probably the hardest, is interpreting the data coming from that image. We are born with some sort of intuition that makes us capable of recognising the slightest nuance on somebody’s face, but a computer is not. Once computers can actually learn all of those aspects, our lives will look very different from what we are used to now.
