How to Build an Emotional Internet Without Emoticons

One day soon we may finally know just how the Internet "feels." Not because of Google Trends or by counting the Twitters — that's kid's stuff. The grown-ups are here, and they've brought some science to "deal" with all these "human emotions." Consider this proposal:

As the web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions… Emotion Markup Language…[is] suitable for use in three different areas: manual annotation of data; automatic recognition of emotion-related states from user behavior; and generation of emotion-related system behavior.

The idea of a markup language shouldn't be too unfamiliar. It's usually a set of special words that get put between angle brackets to tell computers what's going on. Humans can look at text and extract meaning from it, understand how it's organized from context, read it. Computers (at least for now and for the most part) require a consistent and standardized framework for everything, including feelings. Especially feelings.

A big part of the responsibility for putting those frameworks together for the Internet falls to the World Wide Web Consortium (W3C). The above is from the Emotion Markup Language (EmotionML) 1.0 W3C Working Draft. It's the first step towards a coherent and consistent means of annotating emotional states. For example:
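A minimal annotation might look roughly like this (a sketch only: the element names and the vocabulary reference are approximate, and closer to the language's eventual form than to this early draft):

    <!-- One emotion, named using the "big six" category vocabulary -->
    <emotion category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
      <category name="happiness"/>
    </emotion>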

That's a fairly simple one. Things get thorny as additional parameters get thrown in for intensity levels, periods of time, confidence levels, and all those other details we convey to each other daily in our clumsy, ad hoc, human-but-not-machine-understandable ways. But the authors of the draft realize the challenges, noting that "even scientists cannot agree on the number of relevant emotions, or on the names that should be given to them." If even scientists can't count and name all the emotions, what hope is there for a standard means of conveying them?

And that's not the only problem. The document is punctuated by highlighted red boxes of text noting outstanding questions like "Does it make sense to state the intensity of an emotion but not its nature?" or "What do the default values of 'start' and 'end' mean for resources that do not have a notion of time?" It certainly sounds like more than just the scientists may have to be called in.
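To get a feel for where the thorniness comes in, the same sort of annotation with a few of those extra parameters bolted on might run something like this (again a sketch; the attribute names and their exact meanings are approximate, borrowed from later versions of the language):

    <!-- Sketch only: an emotion with an intensity-like value, a confidence
         estimate, and a start/end time span; every number here is invented -->
    <emotion category-set="http://www.w3.org/TR/emotion-voc/xml#big6"
             start="1268647200000" end="1268647205000">
      <category name="surprise" value="0.8" confidence="0.6"/>
    </emotion>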

Feeling kinda funny about this

At first this might all seem a bit silly. Haven't emoticons served us well enough thus far? Are people really going to include little embedded tags in their articles and blog posts designating the very specific emotional valence of each phrase or thought? Even if they did, what would the point be?

Emoticons really are pretty useful, if a little too informal or embarrassing for most situations. Even if one doesn't tend to express oneself with repurposed punctuation marks, it's worth recognizing that the convention does meet a real need. Text-based communication (email, instant messages, blogs, Twitter, most of the Web in general) certainly isn't as expressive as speech. The sentiment that goes along with a statement through intonation or facial expression is lost when it's written down. Tacking on "and that makes me feel happy" doesn't feel quite right. There are also those times one might wish to express an emotion without textual content. A whole emoticon vocabulary has evolved that allows those comfortable doing so to express, however roughly or kitschily, a feeling through a text-based medium without using words.

The goal of an emotion markup language isn't really much different. But it rapidly gets much more difficult once the agenda becomes formal and the criteria for success include the feasible possibility of wide, or even universal, adoption. In the dry language of the draft specification:

A static schema document can only fully validate a language where the valid element names and attribute values are known at the time when the schema is written. For EmotionML, this is not possible because of the fundamental requirement to give users the option of using their own vocabularies.

All possible emotional states can't be anticipated in the specification, so people will have to be able to build upon it with their own contributions. But if that's the case, how will it be possible to determine whether a particular emotionally marked-up (sounds like a state one wouldn't want to be in, doesn't it?) text is valid? There would be no complete set of rules to compare it against. A sketch after the list of use cases below makes the problem concrete.

There's something sort of endearingly naïve about the troubles this project runs up against. Just imagine the erstwhile working group around a table in a basement somewhere struggling with the logistics of creating a universal framework for expressing emotion so that computers can become more relatable. Haven't generations of academics in the humanities and human sciences struggled with just little pieces of this puzzle, and without a great deal in the way of linear progress? There are no doubt legions of romantic literary types out there all too eager to declare this a pointless exercise.

But the goals seem so reasonable: simply to "make the concepts and descriptions developed in the affective sciences available for use in technological contexts." Admittedly, some of the examples given for potential applications of a well-designed EmotionML don't do much to help justify the project, such as "Opinion mining / sentiment analysis in Web 2.0, to automatically track customer's attitude regarding a product across blogs." But that's just the first one on a list, and it sort of feels like it's there in a misguided attempt to demonstrate commercial relevance. Other scenarios are more interesting, and make more sense:

  • Affective monitoring, such as ambient assisted living applications for the elderly, fear detection for surveillance purposes, or using wearable sensors to test customer satisfaction;
  • Character design and control for games and virtual worlds;
  • Social robots, such as guide robots engaging with visitors;
  • Expressive speech synthesis, generating synthetic speech with different emotions, such as happy or sad, friendly or apologetic;
  • Emotion recognition (e.g., for spotting angry customers in speech dialog systems);
  • Support for people with disabilities, such as educational programs for people with autism.
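To make the earlier validation worry concrete: once users can point to vocabularies of their own, a fixed schema can check the shape of the markup, but not whether the names inside it actually belong to the vocabulary being referenced. Everything in this sketch, the URI and the category name alike, is invented:

    <!-- Hypothetical user-defined vocabulary: a static schema can verify the
         structure, but not that "hangry" is a legal name in that set -->
    <emotion category-set="http://example.com/office-moods.xml#categories">
      <category name="hangry"/>
    </emotion>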
Any of the scenarios on that list would require a common and technically suitable means to exchange information about emotions between people and machines, and, just as importantly, within and between machines.

There's a palpable sense here of techno-optimism running up against the nitty-gritty of human emotion when it comes to the details of execution. New technology gets stuck on drawing boards all the time for all sorts of reasons, but one can imagine that this is particularly frustrating for those working on such projects. It's just a bit of information to encapsulate and move. People convey emotions to each other countless times every day, and we've developed a slew of disciplines that pick apart the mechanics of those communications. Can it be that hard to replicate them in a way computers can parse?

The W3C isn't the fastest-moving organization in the world, so it might be a while before we find out the answer to that question. In any case, it's good to know that somewhere people are working on this, if for no other reason than that it leads to sentences like "The following example describes various aspects of an emotionally competent robot," with an accompanying demonstration to match. And honestly, isn't that infinitely richer than a meager ":-)"?
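In the spirit of that demonstration, the markup for an emotionally competent robot might run something like this (every name, channel, and number below is invented, with element and attribute names that follow later versions of the language rather than this draft):

    <!-- Illustrative sketch, not the draft's own robot example: a robot that
         has recognized a visitor's mood and plans its own expressive response -->
    <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
      <!-- what the robot thinks the visitor is feeling, and how sure it is -->
      <emotion>
        <category name="sadness" confidence="0.6"/>
      </emotion>
      <!-- the emotion the robot should express, and through which channels -->
      <emotion expressed-through="face voice">
        <category name="happiness" value="0.4"/>
      </emotion>
    </emotionml>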