Conversational Health in the Age of Covid: How New AI Chatbots Are Improving Patient Outcomes

This year, the healthcare sector has found itself under a forensic microscope as the world comes to terms with a pandemic few predicted. Questions swirl about transmission and infection rates, healthcare systems, vaccines, immune health, lockdowns, the utility of face masks and much else related to the unfolding situation we find ourselves in. It’s a strange time to be alive.

A pandemic in numbers

At the time of writing, there have been 12.2 million confirmed cases of COVID-19, including 3.1 million in the United States. There have also been over 554,000 deaths attributed to the disease, which has hit particular countries with blunt force and raised questions over a host of legislative measures and government failings.

The subject will be endlessly debated for years to come, with one particular topic being the cost of locking down countries versus the risk of continuing as normal or favoring social distancing measures. Vaccines will represent another battlefront, as will the media narrative surrounding the virus, which likely played a part in the more than 50% drop in patients seeking hospital care, according to data from hospital software company Strata Decision Technology.

To be clear, many excess deaths can be attributed to COVID-19 – but other deaths will have been caused by this decline in essential treatment, and by the negative consequences of quarantines.

How can we improve healthcare systems going forward?

Wellness coaches are naturally interested in these matters. What they are especially keen to learn is how we can analyze our response to the crisis and improve healthcare systems going forward. This won’t be an easy task: the data to be considered is vast, and each country’s reaction to the virus requires deep and measured analysis.

Speaking of data, a new report from Hyro, a startup that develops enterprise-grade conversational AI tools, has detailed insights gleaned from novel virtual assistants that have been widely deployed across US healthcare systems in recent months.

These “COVID-19 virtual assistants” use conversational AI to take some of the strain off healthcare professionals and support centers that have been deluged during the greatest public health crisis in a century.

The report is based on an analysis of a random sample of 2,000 AI-to-patient conversations, and I feel there is much we can learn from it, specifically with regard to patient engagement, information source preferences and prevention tactics.

What can we learn from conversational AI?

Conversational AI makes sense to me in many ways, because so many people already flock to sites like WebMD or even Google whenever they have a question about their health – almost one in three, according to a 2019 survey in BMC Family Practice. In the Hyro report, only 20% of respondents who entered a dialogue with the virtual assistant asked to speak to someone, at which point the AI directed them to the relevant healthcare organization’s COVID-19 hotline.
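To make that escalation flow concrete, here is a minimal sketch in Python of how a bot-to-hotline handoff might work. Everything in it is my own assumption for illustration – the phrase matching, the function names and the hotline number – and a real system such as Hyro’s will rely on far more sophisticated language understanding than simple phrase spotting.

    # Hypothetical sketch of a chatbot's human-handoff flow; the phrases,
    # names and hotline number are illustrative assumptions only.
    HOTLINE_NUMBER = "555-0100"  # placeholder hotline number

    HANDOFF_PHRASES = ("speak to someone", "talk to a person", "human", "real person")

    def answer_from_faq(text: str) -> str:
        """Stand-in for the assistant's actual question-answering logic."""
        return "Here is the latest guidance on that topic..."

    def respond(message: str) -> str:
        """Answer a patient message, or escalate to the hotline on request."""
        text = message.lower()
        if any(phrase in text for phrase in HANDOFF_PHRASES):
            # Per the report, roughly 20% of conversations reached this branch.
            return f"Connecting you to the COVID-19 hotline: {HOTLINE_NUMBER}"
        return answer_from_faq(text)

    print(respond("Can I speak to someone about my test results?"))

The appeal of such a design is that the bot absorbs routine questions while keeping an explicit, low-friction path to a human for the minority of patients who want one.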

Provided the AI is sufficiently advanced, I see no problem with using it to improve patient outcomes. Of course, the healthcare provider should remain the main source of health-related knowledge, merely augmented by dependable AI solutions. No one is suggesting the enforced obsolescence of healthcare professionals.

In the aforementioned report, 56% of queries in the ‘Other COVID-19 Related Issues’ category concerned testing – indicating the desperation of individuals to get themselves and their family members tested.

This comes as no surprise: if there’s a chance that you’re infected, you want to know about it so you can take the necessary measures. 28% of all comments related to tests also mentioned the term ‘antibody testing’, indicating a deeper level of awareness that doubtless came from watching news reports and doing independent online research.

Queries quoted in the report include “Does my hospital administer the antibody test for COVID?” and “I’ve already tested negative for antibodies, does that mean that I don’t have it?”

Based on the conversation transcripts, there seemed to be a great deal of confusion regarding where and how patients could access test results, as well as frustration over delays in receiving them. This is definitely something we can learn from, since a feature of the so-called new normal may be regular testing not just for COVID but also for other emerging viruses. Addressing patients’ pain points now will make for a better user experience in the future.

Sure, this insight could have been gleaned from interactions with real human doctors and nurses – but given that they have been swamped of late, it’s great to have such valuable feedback courtesy of AI.

Assuaging patients’ anxiety

One of the main advantages of conversational AI is that it removes any sense of nervousness or reticence on the part of the patient. Oftentimes there are things a patient feels uncomfortable discussing with a doctor (or a wellness coach!), whether out of embarrassment or anxiety. And failing to open up can lead to poor outcomes later on.

As far as patient anxieties related to the coronavirus are concerned, the Hyro report found that 29% of FAQs related to the number of cases – likely patients enquiring about the number of infections at their hospital or in their town. Safety concerns constituted 21% of frequently asked questions, while another 21% sought more information on symptoms. Just 7% concerned prevention and 6% treatment, although, as mentioned, queries about tests were recorded separately from FAQs.
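For readers who like to see the arithmetic behind such breakdowns, here is a hypothetical Python sketch of how topic shares might be tallied from labeled conversation transcripts. The labels and toy data are mine, shaped simply to mirror the report’s percentages, not drawn from Hyro’s actual dataset.

    # Hypothetical tally of FAQ topic shares from labeled queries;
    # the toy data below is shaped to mirror the report's percentages.
    from collections import Counter

    def topic_shares(labels):
        """Return each topic's share of all FAQ queries, as a percentage."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {topic: round(100 * n / total, 1) for topic, n in counts.items()}

    sample = (["number of cases"] * 29 + ["safety concerns"] * 21
              + ["symptoms"] * 21 + ["prevention"] * 7 + ["treatment"] * 6
              + ["other"] * 16)
    print(topic_shares(sample))  # {'number of cases': 29.0, 'safety concerns': 21.0, ...}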

Again, the report provides useful information that enables us to draw conclusions and plan strategies to improve healthcare systems in the future. It makes it abundantly clear which areas of interest matter most to the populace. As the report points out, each of the three terms above – “number of COVID-19 cases,” “COVID-19 safety concerns,” and “COVID-19 symptoms” – spiked in Google Trends data during mid-to-late March.

If lessons are to be learned from the pandemic, reports such as Hyro’s virtual assistant analysis must be digested and reflected upon. We need to have serious conversations about how healthcare systems and hospital policies can better reflect patient priorities and concerns, and reduce pain points in areas such as appointment scheduling, billing, information sharing and telemedicine.

To say we need to make the best of a bad situation may sound flippant, but we will be condemned to repeat the mistakes of the past if we do not learn from them and make the necessary improvements. AI chatbots could have an important role to play in the fight to improve patient outcomes, now and well into the future.

11 thoughts on “Conversational Health in the Age of Covid: How New AI Chatbots Are Improving Patient Outcomes”

  1. Well, Biden isn’t a shoo-in as much as his supporters claim. It’s really feeling like “Hillary in a landslide” all over again.

    When I talk to people in confidence, there’s a lot of criticism of the Democrats’ embrace of a nontheistic religion; they don’t want Democratic tax policies, and they absolutely think Biden is losing his mind. Additionally, when it comes to Trump, most people I talk to dislike him personally but agree with him on a variety of issues, particularly on two points: the threat of China and the effort to bring tech and manufacturing jobs back to the United States.

    As for my comment, “COVID-1984,” it’s not to suggest the disease doesn’t exist, but that we’re getting some key elements from the book 1984 in the media and official responses to this virus… Those key elements are groupthink, newspeak, and Two Minutes Hate.

    If you understand those 3 concepts from the book, it’s not hard to find examples in the COVID-19 response… In fact all could be encapsulated in the colloquial name for the virus.

    First the news media was calling it the Wuhan virus… Then that was spun as racist… groupthink. Then it was demanded we call it COVID-19… groupthink and newspeak. Anyone who refused to ‘get on board’ with the new nomenclature was vilified by the same media, without a shred of irony or introspection on why people use colloquial, region-based names for disease outbreaks… Two Minutes Hate.

  2. I agree that if I can’t tell I’m dealing with a bot, then I won’t care.
    Even if I knew it was a bot, my ego isn’t sensitive enough to trigger on that.

    (Well, assuming it was a normal transaction. If the local priest is replaced by a bot, THEN I’d be miffed.)

  3. I’ve encountered humans who, even completely without a script, couldn’t do better than a chatbot.

  4. Would it make a difference if it could be shown that a bot’s diagnosis was statistically more accurate than that of a human with 20 years’ experience?
    Shouldn’t better outcomes be the most important factor?

  5. I find chatbots highly annoying, too. Not because the general idea offends me, but because they’ve virtually always got holes in their ability to address the topic that make it practically impossible to get to something the author didn’t think of – or, worse, wanted to make difficult to address.

    Not that humans with scripts they’re not allowed to diverge from are much better in that regard.

  6. Personally, I would be angered, or not take it seriously (depending on the severity of the problem), if someone made me chat with a bot instead of listening to my health problems.

    A bot for me is a tool for retrieving information, not a way to ask for help.
