Upon arriving in Milan the team members met for the first time and chose a population to design for. Given an extremely short timeframe and limited access to populations for research, we chose to explore a problem space that we had firsthand experience with.
Two members of our team had emotionally resonant experiences of receiving care in a hospital and being part of a care team for a family member. Both strongly identified with the challenges of communicating emotional needs during this time.
People receiving care often struggle to connect with their families and health care professionals. How can we make sure they have the emotional and medical support they need?
Early research in Human-Computer Interaction gave us a glimpse into how a human brain wired for emotional connection can relate to even a minimalist technical interface.
I won’t tell my blender my secrets, but some of the earliest computer interfaces were remarkably successful at prompting users to share emotionally.
We've long known that the powerful human need for connection can be met in some ways by non-human companions, from pets to houseplants. The development of ELIZA, an early computer therapist, suggested that even digital interactions could meet our needs for emotional connection and sharing.
Academics continue to investigate how computer interactions can support emotional sharing. Researchers at USC have developed a modern-day ELIZA named Ellie that works with veterans returning from deployment to help diagnose mental illness.
Ellie has been especially valuable in this context, using AI to analyze language to diagnose depression and PTSD with more precision than human diagnosis. Ellie can also increase emotional sharing: speaking with a real person, whether through an interface or face to face, can make people anxious about being judged by the psychologist, and users shared significantly more during interactions with Ellie than they did with a human therapist.
Our concept was an interface, Ember, that could use open-ended prompts to initiate emotional sharing for users in a care context. Users' thoughts and feelings could then be shared with the family care team.
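Ember remained a design concept rather than an implementation, but the ELIZA-style mechanics it draws on are simple to sketch: open-ended prompts to invite sharing, plus keyword reflection to echo a user's own words back as a follow-up question. The prompts, patterns, and `ember_reply` function below are illustrative assumptions, not part of the actual design:

```python
import random
import re

# Illustrative sketch only: these prompts and patterns are invented
# for demonstration, not taken from the Ember concept itself.
OPEN_PROMPTS = [
    "How are you feeling today?",
    "Is there anything on your mind you'd like to talk about?",
    "What was the best part of your day?",
]

# (pattern, response template) pairs in the spirit of ELIZA:
# the captured group reflects the user's own words back to them.
REFLECTIONS = [
    (r"i feel (.*)", "Why do you think you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
]

def ember_reply(user_text: str) -> str:
    """Return a reflective follow-up if a pattern matches,
    otherwise fall back to a fresh open-ended prompt."""
    text = user_text.lower().strip()
    for pattern, template in REFLECTIONS:
        match = re.search(pattern, text)
        if match:
            return template.format(match.group(1))
    return random.choice(OPEN_PROMPTS)
```

Even this minimal reflection loop illustrates why ELIZA elicited so much sharing: every reply is built from the user's own words, so the interface never has to understand the content to keep the conversation going.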
Ember would also use AI to analyze users' voices to provide mental and physical health data. Before we could answer the question of how to communicate this data, we needed to understand how it would flow through the different user types.
Interviews with people who had experience with long-term care (as either a caregiver or a recipient) revealed low levels of concern about data privacy or disclosure to the care team. People felt confident that this information, in the hands of family members or medical professionals, would only improve care, and expressed near-universal comfort with sharing both the content of their speech and the health data derived from interactions with Ember.
Having a rough idea of how Ember would function, we set out to prototype the form it might take.
We wanted a form that would:
We reached out to leading researchers and practitioners at IxDA20 to better understand the ways that people connect emotionally to interfaces. Hadar Maximov and Carla Diana both influenced the trajectory of our concept.
Their work articulating the emotional dimensions of our relationships to robots led us to consider what elements make our digital companions beloved family members or pesky annoyances. In particular, we were captivated by Hadar's exploration of the ways people connect with robots, even seemingly unemotional ones like a military robot for disarming landmines or a lawn-mowing or pool-cleaning robot.
How can we design Ember such that interactions spark an emotional connection?
With the goal of optimizing for emotional sharing, we created simple paper prototypes to test reactions to different types of interfaces. We mocked up a human, several pets (a dog, a turtle, a cat), Clippy (Microsoft's infamous digital assistant), and a robot, and created a short interaction script.
We also asked people about the digital characters in their lives. Who was fun? Or frustrating? Or effective? Or comforting?
Users responded most positively to Ember in the form of a dog, and many testers also volunteered that they liked the idea of the dog as a companion for a family member. Our rendering of Clippy evoked visceral negative reactions and sparked conversation about how characters can go wrong.
While the preferences of our research participants influenced our choice to make Ember resemble a pet dog, the feedback about why certain types of interactions were fun or frustrating was more useful for identifying themes common to positive interactions. Microsoft's Clippy could have looked like a dog and been just as annoying as a talking paper clip.
Our takeaway from this research was that coming on strong with a 'personality' might make interacting with Ember less pleasant. Taking after ELIZA and Roomba, our interface would have a minimalist personality.
We also wanted to let users connect with Ember when the time was right, rather than having Ember pop up like a notification demanding attention.
And finally, we wanted to let users care for Ember in some way. We wanted the user to read subtle cues from Ember as an invitation to connect and see a positive impact of their interaction with her.
In the final moments of the charrette, as we were putting the finishing touches on our presentation, we decided to pivot to a different version of Ember.
Although there was a strong stated preference for Ember as a pet, we worried that she felt somewhat childish. Would a person receiving care feel condescended to if asked by their physician to interact with a cartoon dog?
We decided to reimagine Ember the fox as Vera the houseplant. The design principles that we established for Ember (minimalist personality, user-driven connection, and an element of providing care) were a natural fit for a houseplant.