An assessment of ChatGPT's responses to frequently asked questions about cervical and breast cancer
Abstract Background Cervical cancer (CC) and breast cancer (BC) threaten women's well-being, influenced by health-related stigma and a lack of reliable information, which can cause late diagnosis and early death. ChatGPT is likely to become a key source of health information, although quality concerns could also influence health-seeking behaviours.
Main Authors: Zichen Ye, Bo Zhang, Kun Zhang, María José González Méndez, Huijiao Yan, Tong Wu, Yimin Qu, Yu Jiang, Peng Xue, Youlin Qiao
Format: Book
Published: BMC, 2024-09-01
Subjects: Artificial intelligence; ChatGPT; Cervical cancer; Breast cancer; Frequently asked question; Gynecology and obstetrics; Public aspects of medicine
Online Access: Connect to this object online.
MARC
LEADER 00000 am a22000003u 4500
001     doaj_e18ebf912b1f4362b77d7ddde2d0eaa7
042     |a dc
100 1 0 |a Zichen Ye |e author
700 1 0 |a Bo Zhang |e author
700 1 0 |a Kun Zhang |e author
700 1 0 |a María José González Méndez |e author
700 1 0 |a Huijiao Yan |e author
700 1 0 |a Tong Wu |e author
700 1 0 |a Yimin Qu |e author
700 1 0 |a Yu Jiang |e author
700 1 0 |a Peng Xue |e author
700 1 0 |a Youlin Qiao |e author
245 0 0 |a An assessment of ChatGPT's responses to frequently asked questions about cervical and breast cancer
260     |b BMC, |c 2024-09-01T00:00:00Z.
500     |a 10.1186/s12905-024-03320-8
500     |a 1472-6874
520     |a Abstract Background Cervical cancer (CC) and breast cancer (BC) threaten women's well-being, influenced by health-related stigma and a lack of reliable information, which can cause late diagnosis and early death. ChatGPT is likely to become a key source of health information, although quality concerns could also influence health-seeking behaviours. Methods This cross-sectional online survey compared ChatGPT's responses with those of five physicians specializing in mammography and five specializing in gynaecology. Twenty frequently asked questions about CC and BC were asked on 26 and 29 April 2023. A panel of seven experts assessed the accuracy, consistency, and relevance of ChatGPT's responses using a 7-point Likert scale. Responses were analyzed for readability, reliability, and efficiency. ChatGPT's responses were synthesized, and findings are presented as a radar chart. Results ChatGPT had an accuracy score of 7.0 (range: 6.6-7.0) for CC and BC questions, surpassing the highest-scoring physicians (P < 0.05). ChatGPT took an average of 13.6 s (range: 7.6-24.0) to answer each of the 20 questions presented. Readability was comparable to that of the physicians involved, but ChatGPT generated longer responses than the physicians. The consistency of repeated answers was 5.2 (range: 3.4-6.7). With different contexts combined, the overall ChatGPT relevance score was 6.5 (range: 4.8-7.0). Radar plot analysis indicated comparably good accuracy, efficiency, and, to a certain extent, relevance. However, there were apparent inconsistencies, and the reliability and readability were considered inadequate. Conclusions ChatGPT shows promise as an initial source of information for CC and BC. ChatGPT is also highly functional, appears to outperform physicians, and aligns with expert consensus, although there is room for improvement in readability, reliability, and consistency. Future efforts should focus on developing advanced ChatGPT models explicitly designed to improve medical practice and to support those with concerns about symptoms.
546     |a EN
690     |a Artificial intelligence
690     |a ChatGPT
690     |a Cervical cancer
690     |a Breast cancer
690     |a Frequently asked question
690     |a Gynecology and obstetrics
690     |a RG1-991
690     |a Public aspects of medicine
690     |a RA1-1270
655   7 |a article |2 local
786 0   |n BMC Women's Health, Vol 24, Iss 1, Pp 1-10 (2024)
787 0   |n https://doi.org/10.1186/s12905-024-03320-8
787 0   |n https://doaj.org/toc/1472-6874
856 4 1 |u https://doaj.org/article/e18ebf912b1f4362b77d7ddde2d0eaa7 |z Connect to this object online.