CHI16 Review

About 540 papers were accepted to CHI 2016, of which 69 received the Honorable Mention award, 20 received the Best Paper award, and one received the best of all, the “Golden Mouse” award.

Classifying the 20 papers that received Best Paper awards, it seems to me that roughly 25% of them concern quality-of-life improvements, in line with the #chi4good theme chosen for CHI this year.

The 20 Best Papers’ themes were as follows: women’s health, two design papers for visually impaired audiences (see the videos), one for the mobility impaired, teenagers’ emotional/mental health (Dear Diary), user experience improvement (user burden, object-oriented design, eudaimonic vs. hedonic user experience), machine learning approaches in HCI, and a new view on HCI research (an image from the paper is attached below).

[Image]

Another outstanding theme at CHI 2016 was the use of machine learning methods (67 out of 540 papers). It seems the number of machine-learning-related papers at CHI increases every year. One example was SpiroCall: Measuring Lung Function over a Phone Call, presented with a live demo (see the talk video). This paper was quite fascinating because it showed how breathing sounds can serve as an indicator for predicting healthy or unhealthy lungs.

The other ML-related paper that stood out to me was Empath, from Michael Bernstein’s group at Stanford. It uses GloVe, a word-embedding algorithm introduced by Pennington, Socher, and Manning (broadly similar to word2vec), together with deep learning to create new lexical categories for analysis, similar to what LIWC does. LIWC is one of the major tools psychologists use to analyze social media posts, counting words in lexical categories like sadness, health, and positive emotion. However, LIWC is small: it has only 40 topical and emotional categories, many of which contain fewer than 100 words. It is also not free, and it took the company almost eight years to update it. Further, many potentially useful categories, such as violence or social media, don’t exist in the current LIWC lexicons.

Empath, on the other hand, is a free tool that allows users to construct and validate new categories on demand from a few seed terms. It also covers a broad, pre-validated set of 200 emotional and topical categories. Empath ships as a Python library with a handful of examples showing how to use the tool.
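The core idea of seed-to-category expansion can be sketched in a few lines of plain Python. This is my own toy illustration, not Empath’s actual code: the three-dimensional vectors and the `build_category` helper below are made up for the example, whereas the real tool uses high-dimensional embeddings trained on large corpora and validates the resulting word lists with crowd workers.

```python
import math

# Hypothetical toy "embeddings" for illustration only; real systems use
# vectors of hundreds of dimensions learned from billions of words.
EMBEDDINGS = {
    "hit":    [0.9, 0.1, 0.0],
    "punch":  [0.8, 0.2, 0.1],
    "fight":  [0.7, 0.3, 0.0],
    "flower": [0.0, 0.9, 0.4],
    "garden": [0.1, 0.8, 0.5],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def build_category(seeds, vocab, threshold=0.9):
    """Expand a few seed words into a lexical category by pulling in every
    vocabulary word whose embedding is close to some seed."""
    category = set(seeds)
    for word, vec in vocab.items():
        if any(cosine(vec, vocab[s]) >= threshold for s in seeds if s in vocab):
            category.add(word)
    return category

violence = build_category(["hit", "punch"], EMBEDDINGS)
# "fight" joins the category; "flower" and "garden" stay out.
```

Once a category like this exists, analyzing a document reduces to counting how many of its words fall into the category, which is essentially what LIWC does with its hand-built lexicons.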

My speculation is that over the next few years we will see more and more deep-learning-related papers at CHI: TensorFlow came out last year, and many startups are now focusing on apps built around deep learning, since these approaches make dealing with unstructured data easier and social media gives us plenty of it. This year, to the best of my knowledge, two papers adopted deep learning tools; “Social Media Image Analysis for Public Health” was the second paper in this realm.

Last year, when I came across Emotion Mediated through Mid-Air Haptics, I expected to see more papers on affective communication via temperature manipulation on that particular device, or thermal user experience studies of affective communication more generally. Surprisingly, I didn’t see any papers from the first author of the mid-air haptics work. But there were two papers that involved thermal manipulation for affective communication: Hot Under the Collar: Mapping Thermal Feedback to Dimensional Models of Emotion (see the video) and The Effect of Thermal Stimuli on the Emotional Perception of Images (see the video). I expect to see work on the motion and patterns of thermal stimuli at different locations on the body at CHI 2017.

I realized that the design of devices or wearables solely for couples or affective communication can be recognized as a vanishing theme at CHI 2016. At CHI 2013, The Whisper Pillow: A Study of Technology-Mediated Emotional Expression in Close Relationships, and at CHI 2014, Wrigglo: Shape-Changing Peripheral for Interpersonal Mobile Communication, were two examples (see the video) of designs for affective communication. This year, to the best of my knowledge, I didn’t see any papers solely targeting improved affective communication. This may be because there are not many customers willing to spend money on an extra gadget, on top of their smartphones, just for communicating affect. People have already established voice-over-IP tools equipped with audio, video, group chat, and various animated emoticons to communicate affect with their loved ones. What can an extra gadget bring into the picture? Well, one place I can see it helping is communicating negative affect. When it comes to communicating negative affect, people tend to lean toward less rich communication media such as email and text messaging. Perhaps an extra gadget could facilitate communicating anger, frustration, sadness, and so on.

Overall, it seems to me people are more open and receptive toward gadgets or wearables aimed at improving or monitoring health (mental or physical) rather than pure affective communication. Fitbit and the various smartwatches out there are a few examples marketed as health-monitoring devices. They were adopted rapidly at the beginning, and now they are used less and less. So even if a wearable gadget perfect for communicating negative affect came to market, I feel it would bring about momentary pleasure rather than lasting meaning.

I can think of the paper SkinTrack: Using the Body as an Electrical Waveguide for Continuous Finger Tracking on the Skin (see this video) as an example of a device that could be used purely for affective communication. For example, a heart shape drawn on your skin could be picked up by the screen and transmitted to a receiver over a text message. This could be a novel replacement for (or even complement to) GIFs, which have recently gained popularity (see the CHI16 paper by Yahoo Labs, “Fast, Cheap, and Good: Why Animated GIFs Engage Us”). But again, lasting meaning or momentary pleasure (the eudaimonic vs. hedonic user experience paper)? That is the main question that needs to be addressed before investing too much in these new gadgets.

[Image]

Other than the paper mentioned above, the rest of the wearable gadgets were more or less geared toward addressing a mental disorder. For example, EnhancedTouch: A Smart Bracelet for Enhancing Human-Human Physical Touch was aimed at autistic children.

[Image]

I got a chance to sit through a few courses during CHI16. One of them was by Rafael Calvo on “positive computing,” which to me means maximizing the likelihood of well-being through six design strategies: relatedness, competence, engagement, meaning, compassion, and autonomy (see videos I and II).

During the second portion of the talk, it was mentioned that firefighters in Australia are asking for mental-exercise apps for the purpose of emotion regulation. Apparently the firefighting departments in Australia are not that invested in the firefighters’ mental health; instead they focus fully on physical health and well-being, and if one of these firefighters demonstrates mood swings or the like, they risk losing their job (see parts of this discussion in the video). This talk made me think about how police officers, military and navy personnel, or customer service workers practice emotion regulation. Do they receive extensive training, as astronauts do, for example? Perhaps not, yet they are routinely exposed to situations that elicit intense negative emotions. So perhaps we don’t need a gadget that facilitates communicating negative affect; instead, we need gadgets that can help us regulate our emotions more effectively.

Another paper in this realm designed a few gadgets to help individuals with severe borderline personality disorder (people, especially women, who tend to see things in black and white and have a hard time adapting to reality) regulate their emotions (see the video of the talk). Marsha Linehan, the famous author of Skills Training Manual for Treating Borderline Personality Disorder (BPD), proposed four categories to focus on when treating BPD patients: mindfulness, emotion regulation, distress tolerance, and interpersonal effectiveness. She recently published a second edition of the book under the title DBT Skills Training, as she believes these four categories do not only apply to borderline personality disorder; they generalize to a broader range of mental health problems, including anxiety disorders and depression. In the CHI16 paper, an object was designed for each of these four categories and handed to patients to help them with emotion regulation. One of them was a mindfulness sphere that patients could hold in their hands while watching their heart rate projected by LEDs inside the sphere (see the video at minute 6:38, or the image below). I found the paper very interesting because it directly targets the emotion regulation challenges of people, especially women, with borderline personality disorder. It also backs up my theory that tech gadgets are found useful when they are designed for health-related purposes.

[Image]

Another favorite talk of mine was the paper ‘In the Wild’: Experiences of Engagement with Computerized CBT (see the video of the talk). This paper focused on the challenges of computerized cognitive behavioral therapy (CCBT), which centers on challenging automatic thoughts and replacing them with less black-and-white thinking. One thing that stood out to me was that “people would try anything to recover and that invokes a real danger of unethical tech design practice.” At the same time, people have fragile confidence in relation to technology, which may or may not be alleviated with more gamification and designs customized to each mental disorder.

I also enjoyed the simple, straightforward talk “Delineating the Operational Envelope of Mobile and Conventional EDA Sensing on Key Body Locations,” which indicated that half of the people who use wristband EDA sensors fail to show any signal, and it is still unclear why. However, when EDA sensors are attached to the fingertips, they pick up stress signals without problems. This talk supports the idea that “wearables” is a bit of a misnomer when it comes to picking up all physiological signals, stress in particular.

Lastly, it seems the general population is still not ready for dynamic displays on clothing, as suggested by ““I don’t want to wear a screen”: Probing Perceptions of and Possibilities for Dynamic Displays on Clothing”.

[Image]
