The social media cancer misinformation conundrum
CA: A Cancer Journal for Clinicians (IF 503.1), Pub Date: 2021-12-07, DOI: 10.3322/caac.21710
Mike Fillon

Key Points

  • Researchers found that 30.5% of the social media articles they surveyed contained harmful information.
  • They noted that erroneous information spreads more widely than data-driven information from reliable sources.

When patients arrive at their clinical appointments wielding content from social media about their cancer diagnosis and treatment options, there is a nearly 1 in 3 chance that they will be armed with harmful misinformation, according to a new study appearing in JNCI: Journal of the National Cancer Institute (published online July 22, 2021; doi:10.1093/jnci/djab141).

Although some misinformation is harmless or easily corrected, this is not always the case; the researchers found that in many instances, patients come with a fervent hope that a flawed and potentially harmful, even deadly, unproven therapy will cure their disease. The researchers noted that compounding the problem, erroneous information spreads more widely than data-driven information from reliable sources.

The study authors stressed that this growing trend threatens public health and must be corrected quickly because it hinders the delivery of evidence-based medicine while possibly jeopardizing crucial patient-physician relationships.

Study Details

The researchers used web-scraping software to search for relevant keywords in popular English-language articles about the 4 most common cancers: breast, prostate, colorectal, and lung. Their investigation encompassed news articles and blog postings appearing on multiple platforms, including Facebook, Reddit, Twitter, and Pinterest, between January 2018 and December 2019. For each article, they recorded the uniform resource locator (URL) and tracked measures of positive engagement, such as “up-votes” on Twitter and Pinterest, “comments” on Reddit and Facebook, and “votes” and “shares” on Facebook.

They then selected the 50 articles about each cancer type with the greatest total engagement (all platforms combined)—a total of 200 distinct articles—for further study. Because Facebook accounted for the most engagements, its content was also analyzed independently.
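
The report does not describe this selection step in code, but as a rough sketch, assuming a table of per-article engagement counts with hypothetical column names, the 50 most-engaged articles per cancer type could be pulled out as follows once engagements are summed across platforms:

```python
# Minimal sketch (not the authors' pipeline): rank articles by total
# cross-platform engagement and keep the 50 most-engaged per cancer type.
# Column names ("cancer_type", "facebook", "reddit", ...) are hypothetical.
import pandas as pd

articles = pd.DataFrame({
    "url": ["a", "b", "c", "d"],
    "cancer_type": ["breast", "breast", "lung", "lung"],
    "facebook": [1200, 300, 4500, 90],
    "reddit": [40, 10, 15, 2],
    "twitter": [85, 5, 60, 1],
    "pinterest": [3, 0, 7, 0],
})

# Total engagement across all platforms combined.
platforms = ["facebook", "reddit", "twitter", "pinterest"]
articles["total_engagement"] = articles[platforms].sum(axis=1)

# The 50 most-engaged articles within each cancer type.
top_per_cancer = (
    articles.sort_values("total_engagement", ascending=False)
            .groupby("cancer_type")
            .head(50)
)
print(top_per_cancer)
```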

To rate the content, the researchers identified and selected 2 National Comprehensive Cancer Network guideline panel members for each of the 4 cancers under investigation to serve as content experts. The content experts rated each article's primary medical claims with a 5-point Likert scale ([1] true, [2] mostly true, [3] a mixture of both true and false, [4] mostly false, or [5] false). Scores from both experts who assessed each article were combined, with a sum of 6 or greater classified as “misinformation.” National Comprehensive Cancer Network panelists also classified the primary medical claims as (1) certainly not harmful, (2) probably not harmful, (3) uncertain, (4) probably harmful, or (5) certainly harmful. Articles rated as “probably harmful” or “certainly harmful” by either or both reviewers were classified as “harmful.” The raters also used a checklist to record their reasons for classifying articles as “misinformation” or “harmful.”
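
As a rough illustration of the scoring rules just described, the following sketch applies them to hypothetical reviewer ratings: the 2 veracity scores are summed (a sum of 6 or greater flags “misinformation”), and a rating of “probably harmful” or “certainly harmful” from either reviewer flags “harmful.”

```python
# Minimal sketch of the classification rules described above; the example
# ratings are invented and do not come from the study.
def classify_article(veracity_scores, harm_scores):
    """Each argument is a (reviewer1, reviewer2) tuple of 1-5 ratings."""
    misinformation = sum(veracity_scores) >= 6          # summed veracity score of 6 or greater
    harmful = any(score >= 4 for score in harm_scores)  # 4 = probably harmful, 5 = certainly harmful
    return {"misinformation": misinformation, "harmful": harmful}

# Example: reviewers rate the main claim "mostly false" (4) and "a mixture" (3),
# and rate its harm as "probably harmful" (4) and "uncertain" (3).
print(classify_article(veracity_scores=(4, 3), harm_scores=(4, 3)))
# -> {'misinformation': True, 'harmful': True}
```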

Total and Facebook-specific engagements were compared between articles with and without misinformation and/or harm by using a 2-sample Wilcoxon rank-sum test.
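
For readers who want to run this kind of comparison themselves, a minimal sketch using SciPy's rank-sum implementation is shown below; the engagement counts are invented for illustration and are not the study data.

```python
# Minimal sketch of a 2-sample Wilcoxon rank-sum test comparing engagement
# counts for articles with and without misinformation (illustrative numbers).
from scipy.stats import ranksums

engagements_misinformation = [2300, 4700, 1200, 3100, 2600]
engagements_factual = [1600, 819, 4700, 1400, 900]

statistic, p_value = ranksums(engagements_misinformation, engagements_factual)
print(f"rank-sum statistic = {statistic:.2f}, P = {p_value:.3f}")
```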

Study Results

Of the 200 articles investigated in depth, 37.5% (n = 75) were from traditional news sources, including online versions of print/broadcast media; 41.5% (n = 83) were from digital-only nontraditional news sources; 1% (n = 2) were from personal blogs; 3% (n = 6) were from crowdfunding sites; and 17% (n = 34) were from medical journals.

The median number of engagements per article was 1900, with 96.7% of engagements occurring through Facebook. The number of engagements with articles containing misinformation was greater than the number for factual articles (median, 2300 [interquartile range, 1200-4700] vs 1600 [interquartile range, 819-4700]; P = .05).

The content experts reported that 32.5% of the articles reviewed included misinformation (Cohen's κ coefficient = 0.63; 95% CI, 0.5-0.77). Their reasons included a title not supported by the article text or statistics and data not supporting the conclusion (28.8%), misstatement of the strength of the evidence (weak evidence described as strong, or vice versa; 27.7%), and claims involving unproven therapies (26.7%).

The researchers found that 30.5% of the articles featured harmful information (Cohen's κ coefficient = 0.66; 95% CI, 0.52-0.8). The reasons included the potential for harmful inaction that could lead to a delay or refusal of medical attention for treatable or curable conditions (31%), economic harm such as out-of-pocket costs for treatment and/or travel (27.7%), harmful action from the potentially toxic effects of a recommended test or treatment (17%), and harmful interactions due to unreported medical interactions with curative therapies (16.2%).

The researchers found that more than 75% of the articles containing misinformation (76.9%; 50 of 65) also included harmful information. They also reported that articles containing misinformation generated more total engagements than articles with evidence-based information (median, 2300 vs 1600; P = .05).
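
The inter-rater agreement statistics quoted above (Cohen's κ) can be computed from the 2 reviewers' paired labels. The sketch below uses scikit-learn and made-up binary labels (1 = misinformation, 0 = not); it shows the calculation only and does not reproduce the study's values.

```python
# Minimal sketch: Cohen's kappa for agreement between 2 reviewers' labels.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels from reviewer 1
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0]  # hypothetical labels from reviewer 2

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa = {kappa:.2f}")
```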

Study Interpretation

The researchers acknowledged that a limitation of the study was the inclusion of only the most popular English-language cancer articles. They also said that the data lacked important qualitative information. “But this was determined to be beyond the scope of this report,” says lead study author Skyler B. Johnson, MD, of the Department of Radiation Oncology at the University of Utah School of Medicine in Salt Lake City, Utah. “We believe this is the first study to quantify the amount of cancer misinformation on social media.”

The researchers also stated that reviewer bias toward conventional cancer treatments is possible. “However, questions were structured to avoid stigmatization of nontraditional cancer treatments,” they wrote.

Wen-Ying Sylvia Chou, PhD, MPH, program director of the Health Communication and Informatics Research Branch at the National Cancer Institute in Bethesda, Maryland, says that the study is valuable because it sheds light on, and raises alarms about, an urgent problem. “The study tests whether cancer-related information is credible or trustworthy. Their data do point to some alarming trends.”

Dr. Chou, who has been studying health information on social media for the past 5 years, recently edited a special issue of the American Journal of Public Health on health information, in which she coauthored an editorial on research priorities in this area (2020;110:S273-S275. doi:10.2105/AJPH.2020.305905). She says that the clinician's approach to misinformation is critical. “The first question clinicians should ask their patients, so they can understand where they are coming from, is where are they getting this information, and what do they think of it? I think it is very important for clinicians to not pass judgement until they get a little more data.”

“We are in a very turbulent time when many people are fearful and mistrusting of traditional sources of health information, which is reflective of our polarized society,” she continues. She says that these factors—plus some resulting apathy—can contribute to misinformation “silos” online and possibly sow doubt and cause patients to be less trusting of their clinicians and mainstream health organizations. “This is very dangerous because the spread of misinformation has the potential to make society even more divisive.”

Dr. Chou says that she hopes either the current study authors or others will pursue the sources of false information and ascertain their motivations for sharing inaccurate information. “For example, it's possible that they're promoting useless or harmful products.”

Dr. Johnson says that he became interested in the topic as a medical student after searching the internet when his wife was diagnosed with cancer. He said he recognized that much of the information he found was bogus, and he immediately developed empathy for patients in dire need of good news and, hopefully, miracle cures. “We believe the takeaway message from this study is that misinformation online is common and something that patients with cancer will likely encounter. We need to work together to identify ways to help patients navigate the information that they will find online. Patients need to know how to identify harmful misinformation and where to go to get accurate information.”

Photo credit: Shutterstock/PRPicturesProduction


