1966: flowers, hippies and sunshine. One thing not commonly associated with that year, however, is the birth of ELIZA, the first artificial conversational entity, known today more succinctly as a “chatbot”. Created to mimic a Rogerian psychotherapist, who responds to patients by turning their own statements back into questions, ELIZA was the first chatbot to imitate natural human conversation (and a strong early attempt at the famous Turing Test).
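The reflection trick described above can be sketched in a few lines. This is a minimal, illustrative approximation only: the real ELIZA used a much richer script of ranked keyword rules; the function and word list here are our own invented example.

```python
import re

# Minimal sketch of ELIZA-style reflection: swap first- and second-person
# words in the patient's statement, then echo it back as a question.
# (Hypothetical example -- not Weizenbaum's actual script.)
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Turn a patient statement into a Rogerian follow-up question."""
    words = [REFLECTIONS.get(w, w) for w in re.findall(r"[a-z']+", statement.lower())]
    return "Why do you say that " + " ".join(words) + "?"

print(reflect("I am unhappy with my job"))
# -> Why do you say that you are unhappy with your job?
```

Even this toy version shows why ELIZA felt conversational: the response is built entirely from the user's own words, so it always appears on topic.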

2016: 50 years later, Facebook introduced “Bots for the Messenger Platform”, allowing businesses to deliver automated customer support, e-commerce and interactive experiences through chatbots. Over the past two years, chatbot technology has been employed across a variety of services, e.g. commerce, education and entertainment. Chatbots are scalable, available 24/7 and engage users in a natural fashion through their preferred communication channels. In the medical industry, with its unique blend of limited economic, temporal and personnel resources, these qualities open up new opportunities to relieve health care costs and to improve patients’ self-management capabilities in everyday life, and with them overall treatment success and quality of life. Take, for example, Florence, a “personal nurse” that provides timely reminders to take medicine and helps ensure adherence to the treatment schedule laid out by physicians. Likewise, chatbots such as GYANT or Your.MD increase patient health literacy by explaining symptoms and offering advice on healthy lifestyle behaviors to minimize the recurrence of sickness.

The efficacy of “unidirectional” digital health interventions (e.g. simple SMS interventions) has been demonstrated for various health-related topics, ranging from the promotion of medication adherence to the support of health behavior change interventions. In “bidirectional” communication channels, where a chatbot takes on the role of a communication partner, different characteristics of the chatbot can be expected to affect the success of such interventions. Just as the “working alliance” between doctor and patient in medicine rests solidly on interpersonal communication style, so does the capacity to develop a working relationship between chatbot and patient. Depending on the context, some small talk and a sprinkle of humor may be welcome additions to the service encounter, for example when visiting your general practitioner; in an emergency, however, such levity is unlikely to be well received.

The questions and challenges that now arise concern how health service providers and developers can bring to life chatbots that are well received by patients and able to connect on a human, meaningful level. At first glance, chatbots are low-cost, easy to scale and easily implemented into existing messaging services (for example, Facebook Messenger or iMessage). Different contexts, in and out of medicine, will nevertheless require careful, contextualized design considerations. The marketing and human-computer interaction literature has long examined the concept of anthropomorphism, the ascription of human-like qualities to inanimate brands or products. This becomes particularly relevant when you consider that a chatbot is designed to interact with a human – like a human. Three key take-away design considerations are presented below for turning chatbots from cold calculating machines into agents with personalities and idiosyncrasies of their own.

  1. The role of the chatbot: Consumers have pre-existing perceptions of how service levels vary with the seniority of the staff providing care. Choosing an adequate role for a chatbot (e.g. nurse vs. doctor vs. companion vs. something completely different) sets the context and user expectations realistically – and helps ensure that they are met.
  2. The communication style of the chatbot: Compare a rural village doctor’s office with an emergency room. In both cases doctors take care of patients, but the difference in the context of their care calls for correspondingly different communication styles.
  3. Understanding limitations: Because a chatbot cannot intervene physically if a patient fails to adhere to medical treatment, fail-safes must be incorporated at all times, either to encourage adherence or to report back to clinicians that human intervention is needed.
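The third consideration, fail-safes, can be sketched as a simple escalation rule. This is a hypothetical illustration under assumed names (`Patient`, `next_action`, and the dose thresholds are our own invention, not any real product's logic):

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    missed_doses: int = 0  # consecutive missed medication doses

def next_action(patient: Patient) -> str:
    """Escalation fail-safe: decide how the chatbot responds to non-adherence."""
    if patient.missed_doses == 0:
        return "routine-reminder"      # normal scheduled nudge
    if patient.missed_doses < 3:
        return "encourage-adherence"   # motivational follow-up message
    return "notify-clinician"          # hand over: human intervention needed

print(next_action(Patient("A. Patient", missed_doses=3)))
# -> notify-clinician
```

The design point is the last branch: beyond a threshold the chatbot stops trying on its own and explicitly routes the case back to a human clinician.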

References and further readings:

  • Kowatsch, T., Nißen, M., Shih, C. H. I., Rüegger, D., Volland, D., Filler, A., & Brogle, B. (2017), “Text-based Healthcare Chatbots Supporting Patient and Health Professional Teams: Preliminary Results of a Randomized Controlled Trial on Childhood Obesity”, in: Persuasive Embodied Agents for Behavior Change (PEACH2017), ETH Zurich.
  • Kowatsch, T., Nißen, M., Rüegger, D., Stieger, M., Flückiger, C., Allemand, M., & von Wangenheim, F. (2018, in press), “The Impact of Interpersonal Closeness Cues in Text-based Healthcare Chatbots on Attachment Bond and the Desire to Continue Interacting: An Experimental Design” in: Research-in-Progress Papers, Proceedings of the 26th European Conference on Information Systems (ECIS), June 23-28, 2018.

Further information about the authors: Marcia Nißen, Joseph Ollier and Florian von Wangenheim