AI Family Doctor

Introduction

This project addresses one of the prompt questions from the OpenAI grant program relating to AI model behavior.

The objective of this project is to build trust between users and AI-assisted healthcare, particularly in the context of AI diagnosis tools. This project aims to investigate and develop an AI diagnosis tool that offers accurate, transparent, and personalized diagnoses, supported by robust evidence and expert consensus, thereby bridging the trust gap between users and AI systems.

My Role

UX Research
UX Design

Tool

Figma

Team

Dr. Setor Zilevu (Instructor)
Qingyu(Tracy) Li

Problem

How can we build trust between users and AI-assisted healthcare by designing an AI diagnosis tool that provides accurate, transparent, and personalized diagnoses?

Goal

Business Goal
1. Foster a symbiotic relationship between healthcare providers and patients by leveraging our platform to enhance service efficiency and expand access to medical care.
2. Enable healthcare professionals to efficiently access and comprehend patients' medical histories, optimizing their time and effort.

User Goal
Enhance personal health management by offering a 24-hour service that reduces wait times, alleviates anxiety, and streamlines communication and diagnosis.

Solution

1. Build an empathetic and trustworthy AI-powered healthcare platform that offers immediate assistance and guidance to users.
2. Establish a network with local healthcare facilities to ensure seamless healthcare coordination and support.

How did we get to our final solution?
Research Scope

Initial Research Question


Under what conditions, if any, should AI assistants be allowed to provide medical, financial, or legal advice?
Secondary Research

Refining Research Scope


We refined the research focus to the healthcare industry after a thorough evaluation of the current landscape in all three areas (medical, financial, and legal), as well as our time and resource constraints. A review of multiple papers and reports highlighted many problems within the current healthcare system. Healthcare is a universal necessity, yet in the US the limited accessibility of healthcare facilities leads many individuals to self-diagnose, either by searching for information online or by using medical AI tools. However, self-diagnosis lacks accuracy and reliability. AI therefore has great potential and significance in the medical field.
Competitor Analysis

Understanding the Landscape


Our goal was to rapidly learn essential industry information, as we had limited experience with AI technology in healthcare. To achieve this, we conducted a competitor analysis of prominent existing AI-driven healthcare applications to examine the current landscape. Throughout the process, we deepened our understanding of standard practices and prevailing trends within the industry and identified the standard user flows and workflows that set the industry benchmark.

Throughout the process, we identified four main issues in the current landscape:
  1. Empathy and trust: due to trust deficits, the emotional bond and comfort provided by human interaction remain unmatched by AI.
  2. User experience issues: poor interaction design and non-personalized outputs in existing products alienate users.
  3. Technical limitations: current healthcare AI tools are not intelligent enough and raise data privacy concerns.
  4. Legal and regulatory uncertainty: the legal framework and liability of AI-driven technology remain unclear.
Interview & Online Survey

Identifying Key Gaps


To understand the underlying causes, we conducted interviews with two doctors and four adults and distributed an online survey, receiving 28 responses. Key findings highlight that the trust deficit is attributable to three main reasons:
  1. Transparency concerns stem from a lack of understanding of AI algorithms, leading to uncertainty about how results are derived and sourced.
  2. Accuracy concerns arise from unclear data origins and the lack of result validation by authoritative institutions or professionals.
  3. Communication barriers in explaining symptoms and the lack of accurate physical assessments further detract from empathetic and personalized experiences.
In conclusion, we need to build trust between medical AI tools and users by providing access to accurate, transparent, and personalized diagnoses. Meanwhile, the role of AI in healthcare should be primarily supportive and assistive: it cannot substitute for professional medical diagnosis and should operate under human supervision.
Problem Definition

Refined Research Question


How can we build trust between users and AI-assisted healthcare by designing an AI diagnosis tool that provides accurate, transparent, and personalized diagnoses?
Design Ideation

Feature Prioritization

Based on the insights and feedback we collected and analyzed during research, we decided to focus on developing five main features: communication, accuracy, personalization, community, and liability. Together, these tackle the three main user concerns.

User Flow & Wireframe

Designing User Flow

Based on the research findings, we created a user flow and wireframe for our AI Family Doctor tool.

Product details coming soon…

Success Metrics and Next Steps
Success Metrics

User Engagement Rate


1. Privacy: Percentage of users who willingly provide personal data for personalization
2. Return users & long-term relationship: Percentage of users who continue to use this product over an extended period

Accuracy and Reliability


1. Accuracy: Percentage of accurate diagnoses compared with professional standards
2. Satisfaction Rate: User satisfaction with personalized recommendations and diagnoses
3. Communication: User feedback on the clarity of explanations provided by the AI
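As a concrete illustration of how the engagement and accuracy metrics above might be computed from logged sessions, here is a minimal sketch. The field names (`shared_personal_data`, `returned_after_30d`, `ai_dx`, `doctor_dx`) and the sample data are hypothetical assumptions, not part of the actual product.

```python
# Hypothetical sketch: computing the success metrics above from logged
# session records. Field names and sample data are illustrative only.

def pct(numerator, denominator):
    """Return a percentage, guarding against an empty denominator."""
    return 100.0 * numerator / denominator if denominator else 0.0

sessions = [
    {"user": "u1", "shared_personal_data": True,  "returned_after_30d": True,
     "ai_dx": "flu",   "doctor_dx": "flu"},
    {"user": "u2", "shared_personal_data": False, "returned_after_30d": True,
     "ai_dx": "cold",  "doctor_dx": "flu"},
    {"user": "u3", "shared_personal_data": True,  "returned_after_30d": False,
     "ai_dx": "strep", "doctor_dx": "strep"},
]

n = len(sessions)
# Privacy: share of users willing to provide personal data for personalization
privacy_rate = pct(sum(s["shared_personal_data"] for s in sessions), n)
# Return users: share of users still active after an extended period
retention_rate = pct(sum(s["returned_after_30d"] for s in sessions), n)
# Accuracy: share of AI diagnoses matching the professional diagnosis
accuracy = pct(sum(s["ai_dx"] == s["doctor_dx"] for s in sessions), n)

print(f"Privacy opt-in: {privacy_rate:.0f}%")
print(f"30-day retention: {retention_rate:.0f}%")
print(f"Diagnostic accuracy vs. doctor: {accuracy:.0f}%")
```

In practice, accuracy would be validated against clinician review rather than a single logged label, but the rates reduce to the same simple ratios.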

Next Steps

1. Conduct further user testing
2. Iterate the design based on user feedback
3. Contact local healthcare facilities and build connections with them

Reflection

1. This project deepened my understanding of empathetic AI design by applying the Human-in-the-Loop principle throughout the research and design process.

2. It also enhanced my understanding of AI methodologies and algorithms and pushed me to think critically about AI-driven tools.

3. I learned how to quickly research and absorb the essential knowledge of an unfamiliar industry in order to approach and tackle a problem efficiently.

4. There is great potential for AI in the medical industry, although its role should primarily be supportive and assistive.