Portrayal of Cancer Patients in the Era of AI: A Content Analysis of Images Produced by Generative AI Tools.
OpenAlex topics: Media Influence and Health · Empathy and Medical Education · Patient-Provider Communication in Healthcare
APA
Chou, W.-Y. S., Gaysynsky, A., et al. (2026). Portrayal of cancer patients in the era of AI: A content analysis of images produced by generative AI tools. Health Communication, 41(5), 888-898. https://doi.org/10.1080/10410236.2025.2537807
PMID
40776397
Abstract
This study sought to characterize images of cancer patients generated by Artificial Intelligence (AI) text-to-image tools, and assess whether images differed by cancer type or AI tool, to elucidate the potential implications of using AI-generated images in health communication. Two generative AI-based tools, DALL-E 3 and Stable Diffusion, were prompted to produce images of a "cancer patient," "breast cancer patient," "lung cancer patient," and "prostate cancer patient". Images (N = 320) were coded for perceived demographics, illness features, affect, cancer symbols, setting, and photorealism. Analysis revealed that AI tools commonly depicted cancer patients as White (83.2%) and middle-aged or older (87.5%). Compared to general cancer patient images, breast cancer patients were portrayed as younger, while prostate and lung cancer patients were depicted as older. Breast cancer patients were also more frequently depicted as healthy and displaying positive affect, while lung cancer patients were more often depicted as ill and showing negative affect. Differences were also found between the AI tools, with DALL-E images featuring more racial diversity and being less photorealistic compared to images produced by Stable Diffusion. Because generative AI tools may produce images of cancer patients that are limited on some dimensions of diversity, and in some cases may reinforce stereotypes (eg, breast cancer patients as healthy and happy, lung cancer patients as ill and hopeless), it is critical to consider biases that may exist in these models - and the potential societal implications of using AI-generated images of cancer patients - before these tools are deployed in cancer communication efforts.
Introduction
Researchers are increasingly recognizing the potential for Artificial Intelligence (AI) technologies such as large language models to have a transformative effect on health communication (Dunn et al., 2023). However, less attention has been paid in the field of health communication to the possible impact of AI-based image generators. Generative AI tools that convert text descriptions into images are being used to generate millions of images daily (Bianchi et al., 2023). These text-to-image tools require little technical expertise to operate but can quickly generate impressively detailed, realistic, relevant, and novel images based on text prompts (Ali et al., 2024). Trained on vast datasets of images paired with captions scraped from the internet (Sonmez et al., 2024), these tools use machine learning methods to extract key information from text prompts (eg, the relationship between objects) and then generate an image based on that information (Bird et al., 2023).
There are many potential applications of these tools in health communication efforts (Alenichev et al., 2023). AI image generators can be used to create promotional materials for health organizations, design patient-facing educational materials (Ali et al., 2024), produce medical illustrations to support medical education (Kumar et al., 2024), and make interventions more engaging (Sezgin & McKay, 2024). However, to date, robust health communication research evaluating the use and impact of generative AI tools has largely focused on text-based large language models and associated tools (eg, chatbots, virtual health assistants) (S. Chen et al., 2023; Li et al., 2024; Vilaro et al., 2022), while little attention has been paid to text-to-image tools (Buzzaccarini et al., 2024). The expanding availability and uptake of these tools warrants systematic evaluation by health communication scholars of the outputs they produce.
One concern with AI text-to-image tools is that generated images may perpetuate or even amplify existing biases, stereotypes, and disparities, for example if the training data used in model development are skewed (Ali et al., 2024; Y. Chen et al., 2024; Sun et al., 2024). Prior research has identified gender and racial stereotypes in AI-generated images (Y. Chen et al., 2024; Fraser et al., 2023a). For instance, studies have found that receptionists are more likely to be portrayed as female while most engineers are depicted as male (Cho et al., 2023), and certain attributes (eg, “poor”) are more likely to be associated with darker skin tones (Fraser et al., 2023b). Generated images related to health have similarly been found to lack diversity: one study found that AI models depict surgeons as White and male in the vast majority of instances, significantly under-representing female and non-White surgeons relative to real-world data (Ali et al., 2024). Another study found that AI images generated for “dementia” overrepresented light-skinned individuals and featured visual tropes that could reinforce harmful disease stereotypes (Putland et al., 2023).
Few studies to date have examined AI-generated images in the context of cancer, an important area of research for several reasons. First, scholars increasingly recognize the importance of visuals in health communication messages, including their potential to influence outcomes such as attention, comprehension, recall, and behavior (Gatting et al., 2023; King, 2015). In fact, many guidelines for the development of health education materials recommend the use of images in patient- and public-facing health information resources as a best practice (Gatting et al., 2023).
Second, AI-generated images are increasingly being deployed in industries that shape cultural norms (eg, entertainment media and marketing). It is therefore important to consider how visual portrayals of cancer may impact cultural narratives about the disease. Cultivation theory in health communication posits that with repeated exposure, media narratives can shape worldviews, norms, and perceptions of reality (Romer et al., 2014). The theory is concerned not with the immediate effects of exposure to a message, but rather the long-term consequences of cumulative exposure to messages and images that may both reflect and shape the ways people think about the world – for example, beliefs about crime as a result of exposure to television narratives about violence (Morgan et al., 2014). As AI images become more ubiquitous, exposure to the narratives and potential biases they contain may similarly influence attitudes toward, and perceptions of, subjects like cancer. The way cancer is depicted in popular media, and its implications for social perceptions of the condition, has been a long-standing area of interest for health communication researchers (Champion et al., 2016). This research has revealed common themes that suggest a persistent cultural framing of the illness. For example, magazine advertisements related to breast cancer have been found to overwhelmingly promote hope and positive experiences (AbiGhannam et al., 2018), and media representations of breast cancer patients tend to portray them as optimistic “fighters” (Champion et al., 2016). Understanding the narratives communicated by visual portrayals of cancer is important, given their potential to impact not only how others view and treat those with a cancer diagnosis but patients’ own experiences and understanding of their condition. For example, in one study, young cancer survivors reported finding that entertainment narratives often depict cancer experiences unrealistically (eg, underestimating the time spent receiving chemotherapy; showcasing individuals wearing makeup and not experiencing treatment-induced nausea), contributing to emotional distress and internalized stigma (Reffner Collins et al., 2024).
Third, prior research into visual portrayals of cancer in popular media has identified pronounced biases, which could be further reproduced by AI tools. For example, an examination of images in consumer cancer magazines found that the images featured primarily younger (47% under 40), female (61%), White (77%), and healthy-looking (76%) people (Phillips et al., 2011). An analysis of breast cancer images in women’s and fashion magazines similarly found that they tended to depict mostly White (81%), young (81%), and attractive (99%) women with positive facial expressions (88%) and healthy-looking body types (94%) (McWhirter et al., 2012). Finally, a study of breast cancer-related content on Pinterest found that a majority of pins containing an image of a person depicted White (84%) and female (97%) adults (Miller et al., 2019). It is important to assess whether these trends are replicated in AI-generated images, or if AI-generated images introduce unique biases into cancer patient portrayals.
Although AI text-to-image tools could be used for a variety of cancer-related communication purposes (eg, creation of patient education materials, development of stimuli for studies), their utility would be severely limited if they consistently produce images that lack diversity or portray cancer patients in stigmatizing or inauthentic ways. Furthermore, the use of problematic generated images in communications more broadly (eg, news content) could have larger societal implications, as exposure to stereotypical images about a certain group can reinforce negative stereotypes about that group among others and also have an adverse impact on the self-perception of those in the affected group (Bianchi et al., 2023; Jean et al., 2022; Kay et al., 2015; McClure et al., 2011).
In the current study, we examined how two leading AI image generators (DALL-E and Stable Diffusion) depict individuals with cancer to assess the characteristics of these outputs and obtain insight into the utility of these generated images for real-world applications. We generated images of patients with cancer (in general) as well as patients with three common cancers (breast, prostate, and lung) to assess whether there are differences in the way specific cancer patients are portrayed. This in-depth analysis will help inform our understanding of how cancer patients are portrayed by popular AI tools and highlight critical considerations for the use of AI tools in cancer research and practice.
Materials and methods
Sample generation
We used Stable Diffusion and DALL-E 3 to generate 40 images per tool for each of the following prompts: “[a photograph of a] cancer patient,” “[a photograph of a] breast cancer patient,” “[a photograph of a] prostate cancer patient,” and “[a photograph of a] lung cancer patient,” for a total of 320 images. This sampling strategy is in line with previously published studies of AI images (Fraser et al., 2023a, 2023b; Putland et al., 2023). The final prompts were determined based on multiple rounds of testing and assessment of outputs on both platforms. The phrase “a photograph of” was added to prompts for DALL-E 3 to obtain images that were more comparable to those generated with Stable Diffusion’s default style setting. Additionally, DALL-E 3 was queried through the ChatGPT interface, and explicit instructions not to alter the prompt (“do not make any modifications or additions to the prompt”) were added to prevent the tool from editing the prompt, thereby ensuring consistency in the generation process. After the initial prompt refinement process, we followed procedures similar to those that have been used in previous content analysis studies of AI images where a standardized prompt was used to generate a large set of images. We intentionally avoided additional prompt engineering to best ascertain the extent of heterogeneity in outputs. The images were generated over a two-week period (March 21, 2024–April 2, 2024).
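To make the batch-generation procedure concrete, the sketch below shows how such a run could be scripted. This is not the authors' code: the study queried DALL-E 3 through the ChatGPT interface rather than an API, and the model identifiers, checkpoint, and file paths here are illustrative assumptions.

```python
# Hypothetical sketch: 40 images per prompt per tool, via the OpenAI
# Images API (DALL-E 3) and a local Stable Diffusion pipeline. Model
# names and parameters are assumptions, not the study's configuration.
from pathlib import Path

import requests
import torch
from diffusers import StableDiffusionPipeline  # pip install diffusers
from openai import OpenAI                      # pip install openai

PROMPTS = [
    "a photograph of a cancer patient",
    "a photograph of a breast cancer patient",
    "a photograph of a prostate cancer patient",
    "a photograph of a lung cancer patient",
]
N_PER_PROMPT = 40
OUT = Path("generated_images")
OUT.mkdir(exist_ok=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
sd_pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for p_idx, prompt in enumerate(PROMPTS):
    for i in range(N_PER_PROMPT):
        # DALL-E 3 returns a URL to the generated image; download and save.
        resp = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        img = requests.get(resp.data[0].url, timeout=60).content
        (OUT / f"dalle_p{p_idx}_{i:02d}.png").write_bytes(img)

        # Stable Diffusion samples locally; each call draws a fresh seed.
        sd_pipe(prompt).images[0].save(OUT / f"sd_p{p_idx}_{i:02d}.png")
```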
Image coding
Features of images were identified and categorized (hereafter referred to as “coded”) by pairs of coders using a codebook iteratively developed by the research team (the final version of the codebook in its entirety is available in Appendix A). Specific codes were informed by prior research on visual representations of cancer patients (Grant & Hundley, 2009; McWhirter et al., 2012; Phillips et al., 2011), other studies of AI-generated images (Fraser et al., 2023b; Putland et al., 2023), and key image features identified in the pilot round of image generation. Codes included demographic characteristics of individuals depicted in the images (eg, perceived race, age, gender), affect, overall health appearance and indicators of illness, setting, presence of cancer symbols (eg, cancer ribbons, the color pink), and level of photorealism (ie, to what extent the image appears to be a photograph of a real person). The codebook was further refined through an initial coding of 20% of images. All images were then double coded using a Qualtrics survey form built from the final codebook. Images were randomly assigned to pairs of coders who were blinded to the prompt and the AI tool that was used to generate the image. Given that much of this coding entailed subjective judgments, efforts were made to minimize intercoder variability, including by having all coders participate in extensive training and pilot coding debriefings prior to coding the final dataset. During these meetings, coders received additional guidance on how to consistently apply and operationalize the various coding categories. For example, it was clarified that medical scrubs and white lab coats (in addition to hospital gowns) count as “medical clothing,” that rendering errors should not be taken into account when assessing photorealism (coding for that variable should only focus on the style of the image), and that when coding for affect, decisions should be based on the overall facial expression of the individual, and not just the presence or absence of a smile. Overall, agreement between coders was relatively high in the final dataset, with percent agreement ranging from 62% (for health appearance) to 99% (for cancer ribbon). A third coder adjudicated disagreements between the initial two coders for the final dataset.
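As a concrete illustration of the reliability workflow described above, the sketch below computes per-variable percent agreement between the two coders and flags images for third-coder adjudication. The file name, column names, and variable list are hypothetical, not the study's actual data layout.

```python
# Hypothetical sketch: percent agreement per coded variable, plus a list
# of images whose disagreements go to a third coder for adjudication.
import pandas as pd

# Assumed long format: one row per (image, coder), one column per code.
# image_id | coder | affect | health_appearance | cancer_ribbon | ...
ratings = pd.read_csv("double_coded_ratings.csv")

variables = ["affect", "health_appearance", "cancer_ribbon"]
wide = ratings.pivot(index="image_id", columns="coder", values=variables)

for var in variables:
    coder_a, coder_b = wide[var].iloc[:, 0], wide[var].iloc[:, 1]
    print(f"{var}: {(coder_a == coder_b).mean():.0%} agreement")

# Any image with at least one disagreeing variable needs adjudication.
needs_adjudication = [
    img_id for img_id in wide.index
    if any(wide[var].loc[img_id].nunique() > 1 for var in variables)
]
print(f"{len(needs_adjudication)} images flagged for a third coder")
```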
Analysis
Out of the 320 images initially generated, 17 images were removed from the sample because they did not contain a person or did not clearly show their facial features, yielding a final analytic sample of 303 images. For analysis, several codes, including health appearance, affect, and photorealism were converted from 5- to 3-point scales (eg, combining “slightly negative” and “negative” as well as “slightly positive” and “positive” affect), and two response categories in the “setting” code were collapsed.
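The scale-collapsing step might look like the following sketch; the column and level names are hypothetical.

```python
# Hypothetical sketch: recoding a 5-point affect scale to 3 points by
# merging the "slightly" categories into their parent categories.
import pandas as pd

coded = pd.read_csv("final_coded_images.csv")

affect_map = {
    "negative": "negative", "slightly negative": "negative",
    "neutral": "neutral",
    "slightly positive": "positive", "positive": "positive",
}
coded["affect_3pt"] = coded["affect"].map(affect_map)
```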
Summary statistics (frequencies and percentages) were calculated separately by prompt and AI tool. Chi-square tests (and Fisher’s Exact tests when any cell had an expected frequency ≤5) (Kim, 2017) were then used to determine whether there were significant differences in characteristics between 1) the cancer site-specific prompts compared to the general cancer patient prompt; and 2) the two AI tools. Qualitative insights derived from coders’ observations are also included to highlight additional salient features and trends that emerged during coding.
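The comparisons described above could be run as in the sketch below; the counts are invented for illustration and are not the study's data. Note that SciPy's fisher_exact handles only 2 x 2 tables, so larger tables would require a different exact-test implementation.

```python
# Hypothetical sketch: chi-square test of a coded characteristic across
# two prompts, falling back to Fisher's exact test when any expected
# cell count is <=5. The counts here are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Rows: prompt (general CP, lung CP); columns: affect (negative, other).
table = np.array([[12, 68],
                  [51, 22]])

chi2, p, dof, expected = chi2_contingency(table)
if (expected <= 5).any():
    # Fisher's exact test (SciPy supports 2x2 tables only).
    _, p = fisher_exact(table)

print(f"p = {p:.4f}")
```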
Results
Across prompts and tools, generated images tended to portray cancer patients as White and middle-aged or older (Figure 1). Overall, cancer patients were infrequently portrayed as very ill. Although some characteristics were similar across prompts (eg, higher representation of White individuals), significant differences between the general “cancer patient” prompt and each of the cancer site-specific prompts were also noted (Table 1).
Demographics
Most cancer patients (hereafter CPs) portrayed in AI-generated images were White (79.2–91.3% across prompts; 83.2% overall). Notably, most images not coded as “White” were coded as “unclear,” indicating that the race/ethnicity of the individual portrayed was ambiguous. Age representation varied across prompts, with breast CPs portrayed as younger than general CPs (only 3.8% of breast CP images portrayed older adults compared to 27.5% of general CP images), and individuals in both prostate and lung CP images being portrayed as older more frequently than those in general CP images (with 73.6% and 87.7% coded as older adults, respectively). In terms of gender, individuals in general CP images (82.5%) and breast CP images (92.3%) were mostly perceived as feminine, while both lung CP and prostate CP images had higher rates of masculine-seeming individuals (100% for prostate cancer and 78.1% for lung cancer).
Affect
There were differences in affect between breast CP images and the general CP images: none of the breast CP images portrayed individuals with negative affect (vs. 15.0% of general CP images), and a majority conveyed positive affect (67.9% vs. 31.3% for general CP images). Conversely, lung CP images featured much higher rates of negative affect (69.9%) compared to general CP images. Over half of the prostate CP images (52.8%) and general CP images (53.8%) depicted individuals with a neutral affect.
Illness features
There were also notable differences across prompts on health appearance. No breast CP images contained individuals who looked clearly ill, and 73.1% of images featured individuals that looked healthy; whereas in the general CP images, 21.3% of individuals were coded as looking ill and the same percentage were coded as looking clearly healthy. Prostate CP images also had a higher percentage of healthy-looking individuals (44.4%), and a lower percentage of unhealthy-looking individuals (2.8%), compared to the general CP images. In contrast, individuals in lung CP images were portrayed as ill at a much higher rate compared to general CP images (56.2% vs 21.3%). Additionally, 28.8% of general CP images portrayed the subject lying or sitting in bed, which was a less frequent feature in images generated with the site-specific prompts (0% for breast CP; 8.3% for prostate CP; and 12.3% for lung CP). Furthermore, 62.5% of general CP images and 46.2% of breast CP images portrayed an individual with a head covering. In contrast, no prostate CP images and only 8.2% of lung CP images showed head coverings.
Cancer symbols
The color pink was more prevalent in the breast CP images (82.1%) compared to the general CP images (51.3%). In contrast, only approximately 4% of the prostate CP and lung CP images contained the color pink. Additionally, although the inclusion of a cancer ribbon was not common overall (13.2%), breast CP images more frequently contained a ribbon compared to the general CP images (39.7% vs 10.0%), while prostate and lung CP images less frequently contained a ribbon (1.4% and 0%, respectively).
Setting features
Overall, about half of the images (51.5%) had no discernible background (eg, a solid-colored background). Fewer breast CP images featured a medical setting (1.3%) compared to general CP images (22.5%). Overall, medical equipment was present in 19.5% of images and medical staff were seen in 7.3% of images. Compared to general CP images (26.3%), both breast CP images and prostate CP images depicted medical equipment less frequently (6.4% and 11.1%, respectively), and fewer breast CP images included medical staff (0% vs. 10.0% for general CP images).
Photorealism
The majority of images were photorealistic (79.5%), but breast CP images were more frequently rated as photorealistic than general CP images (92.3% vs. 80.0%), whereas prostate and lung CP images were rated as photorealistic less frequently.
Differences between AI text-to-image tools
There were clear differences in the images generated by DALL-E and Stable Diffusion (Table 2). For example, although most DALL-E images featured White individuals (77.6%), there was more racial diversity in these images than in Stable Diffusion images (88.1% of which were coded as White and 0% coded as clearly non-White). There were also differences in age representation: a greater percentage of DALL-E images portrayed younger adults (24.5%) compared to Stable Diffusion images (1.3%). Additionally, Stable Diffusion images more frequently showed subjects in bed (22.5% vs 1.4% in DALL-E images), and more frequently portrayed CPs with head coverings (47.5% vs. 11.2% in DALL-E images). Finally, there was a pronounced difference in terms of photorealism, as 100% of Stable Diffusion images were coded as photorealistic, compared to 56.6% of DALL-E images.
Qualitative observations
In addition to the features summarized above, other salient themes emerged during coding. First, many images (especially of general CPs and breast CPs) featured feminine individuals who aligned with societal beauty ideals, such as being thin and wearing make-up. Several images also featured nudity, where the individual’s breast(s) were visible (see Figure 1, #13). Additionally, some of the lung CP images depicted individuals who appeared to be smoking or holding cigarettes (Figure 1, #28). Images also sometimes combined visual markers signifying CPs (eg, head coverings) and visual markers associated with healthcare providers, such as white coats, medical scrubs, and stethoscopes, that patients would not realistically wear (Figure 1, #11). Many images – particularly those generated by DALL-E – also included pronounced anatomical elements such as internal organs (eg, Figure 1, #23 and #34), highlighting their biomedical emphasis. Finally, while the level of photorealism was relatively high, obvious rendering errors that made the images’ generated nature apparent (eg, too many fingers, anatomically impossible hand placement) were also frequently observed.
Discussion
This analysis sought to characterize images of cancer patients produced by AI text-to-image tools. The first notable finding was the lack of racial diversity in generated images, with most depicting White individuals, and few clearly depicting persons of color. Age and gender representation were also skewed for certain prompts, and while these imbalances may be logical in some cases (eg, higher representation of individuals who are middle-aged or older may reflect the fact that cancer risk increases with age, the lack of feminine individuals in prostate cancer images corresponds with biological risk), in other cases, these distributions do not reflect reality. For example, only 16.4% of lung CPs were coded as feminine, whereas the real-world gender disparity in lung cancer incidence is not nearly as large (Fu et al., 2023; Sharma, 2022). Similar discrepancies with risk statistics have been previously observed with cancer images published in magazines (Phillips et al., 2011), indicating a replication of bias. Lack of demographic representation, particularly regarding race/ethnicity, is problematic because the “erasure” of certain groups can reinforce disparities and contribute to certain groups being overlooked in cancer control efforts. It is important to ensure that AI-generated images equitably and accurately represent people across demographic groups (Fraser et al., 2023a). Limited demographic representation in AI-generated images may also have implications for health communication efforts that incorporate these images. Social identity theory (which posits that people define their sense of self in terms of social categories and group memberships) suggests that individuals are more likely to attend to a message if they identify with the individuals portrayed in the images accompanying the message, and research suggests this “identification” is often based on characteristics such as race and gender (Phillips et al., 2011).
Beyond demographics, important patterns were also observed in other image features. Unlike general CP images, most images depicted breast CPs as healthy (73.1%) and with positive affect (67.9%), whereas lung CPs were more often portrayed as ill (56.2%) and with negative affect (69.9%). These findings show how AI tools may reinforce both “aspirational” cancer experiences as well as negative stereotypes about a cancer diagnosis. Studies have shown that public attitudes toward lung cancer are more negative than attitudes toward breast cancer, and that lung cancer is more frequently associated with despair than breast cancer (Sriram et al., 2015). Whether AI images reinforce or challenge these beliefs is important to assess because the ways in which cancer patients are portrayed can influence both how people experience cancer (eg, sense of identity, expectations) and how people with cancer are treated by others (Putland et al., 2023).
Although lung cancer survival rates are lower than survival rates for breast cancer, consistently portraying lung CPs as gravely ill and hopeless could help reinforce the common view that the disease is always fatal, which could contribute to therapeutic nihilism and negatively impact health seeking behaviors and treatment decisions among individuals who have, or are at risk for, lung cancer (Sriram et al., 2015; Tran et al., 2015). On the other hand, although portraying breast CPs as healthy and happy could foster hope, it may also have negative effects, such as reducing the perceived seriousness of the disease and the urgency of taking preventive action (eg, screening) (McWhirter et al., 2012) or even alienating patients whose real-life experience with breast cancer is not as positive (Bock, 2013; Reffner Collins et al., 2024). These optimistic portrayals of breast CPs may reflect the widespread dissemination of images associated with “pink ribbon culture” promoted by breast cancer advocacy groups, companies, and the mass media, and which has been critiqued for “sugarcoating” the disease, emphasizing a forced cheerfulness (McDonnell et al., 2017), and centering individualism, feminine ideals, and an imperative for optimism (Gibson et al., 2014). Although it is not necessarily problematic for any single image to portray a healthy breast CP or an unhappy lung CP, the consistency of these depictions in the images generated may reinforce certain narratives. Ideally, images of CPs should portray the full spectrum of cancer experiences.
Additionally, the frequency of cancer ribbons and the color pink in breast CP images reflect commonly used visual markers of breast cancer (AbiGhannam et al., 2018). The pink ribbon has become an instantly recognizable symbol for breast cancer, and while it may have positive connotations of strength and hope, not everyone may perceive this symbol positively (Harvey & Strahilevitz, 2009). Additionally, the use of pink ribbons may be a way to signify cancer in visual materials without having to use “objectionable” or negative images of cancer, which makes it more palatable for use in communication efforts, including cause-related marketing campaigns (Harvey & Strahilevitz, 2009). As pink is commonly associated with femininity in Western culture, frequent use of pink may reinforce the idea of breast cancer as a “women’s disease” (Wagner, 2005).
Lastly, a substantial number of lung CP images included allusions to smoking, which could help further feed into the prevailing narrative that lung cancer is a self-inflicted disease caused by a person’s smoking (Tran et al., 2015), and increase societal stigma associated with lung cancer. It is possible that fear-based tobacco control campaigns have contributed to a prevalent visual association between lung cancer and cigarette smoking, as well as negative portrayals of lung cancer patients, and these associations are being reflected in AI-generated images.
The potential for AI-generated images to increase or reinforce stigma needs to be further evaluated and monitored because stigma can negatively impact lung cancer patients’ health and wellbeing, leading them to feel guilt or shame, experience psychological distress, and avoid or delay seeking medical care (Mazières et al., 2015; Tran et al., 2015). Additionally, exposure to negative representations of a group can reinforce stereotypes and encourage negative attitudes toward members of that group (Harwood, 2020). Work on social identity theory suggests a link between media exposure and intergroup outcomes through the media’s role in creating commonly held norms and conventions: by promoting representations that emphasize particular aspects of a group (eg, many lung cancer patients smoke) while ignoring others, media images and messages can play a role in creating shared norms and activating the use of these constructs in subsequent evaluations (Mastro, 2003; McKinley et al., 2014).
The potential for generative AI systems to reproduce and reinforce biases and stereotypes is concerning because these perspectives, hidden under the guise of technological neutrality, can influence social perceptions of reality (Gorska & Jemielniak, 2023), especially if disseminated on a large scale (Ali et al., 2024; Bianchi et al., 2023). Although modifying the prompts used could help mitigate some of the observed patterns (eg, specifying images of non-White patients to increase racial diversity), prompt engineering is generally quite limited as a solution to the complex and embedded biases in these tools (Bird et al., 2023). More systematic mitigation approaches, such as involving stakeholders in the design of these systems, using higher-quality training data, and enacting guidelines to increase transparency and accountability, are likely needed to help reduce the risk of harm from these tools (Bird et al., 2023). Furthermore, promoting greater AI literacy among communication practitioners, researchers, health care providers, and the general public may also help offset the potential harms of biased AI-generated images. Overly optimistic beliefs in the potential and objectivity of AI systems could cause individuals to perceive AI tools as more impartial or neutral than humans (Helberger et al., 2020; Klingbeil et al., 2024), and consequently be less wary of the potential for bias in the products generated by these systems. In addition, a better understanding of AI tools could help users develop more accurate perceptions of their capabilities, recognize biases in these tools, and make more appropriate decisions about what to do with the outputs they produce (Pinski & Benlian, 2024). Beyond didactic education to promote AI literacy, hands-on training may offer a tangible way to help learners understand how AI tools work. For example, a mobile app (“AiLingo”) that enables users to iteratively configure an image classification model by selecting different training data and assessing changes in the model’s performance was found to increase both subjective and objective AI literacy (Pinski et al., 2024). Finally, training in prompt engineering may help users of AI systems mitigate biases in the absence of more comprehensive solutions.
This analysis sought to characterize images of cancer patients produced by AI text-to-image tools. The first notable finding was the lack of racial diversity in generated images, with most depicting White individuals, and few clearly depicting persons of color. Age and gender representation were also skewed for certain prompts, and while these imbalances may be logical in some cases (eg, higher representation of individuals who are middle-aged or older may reflect the fact that cancer risk increases with age, the lack of feminine individuals in prostate cancer images corresponds with biological risk), in other cases, these distributions do not reflect reality. For example, only 16.4% of lung CPs were coded as feminine, whereas the real-world gender disparity in lung cancer incidence is not nearly as large (Fu et al., 2023; Sharma, 2022). Similar discrepancies with risk statistics have been previously observed with cancer images published in magazines (Phillips et al., 2011), indicating a replication of bias. Lack of demographic representation, particularly regarding race/ethnicity, is problematic because the “erasure” of certain groups can reinforce disparities and contribute to certain groups being overlooked in cancer control efforts. It is important to ensure that AI-generated images equitably and accurately represent people across demographic groups (Fraser et al., 2023a). Limited demographic representation in AI-generated images may also have implications for health communication efforts that incorporate these images. Social identity theory (which posits that people define their sense of self in terms of social categories and group memberships) suggests that individuals are more likely to attend to a message if they identify with the individuals portrayed in the images accompanying the message, and research suggests this “identification” is often based on characteristics such as race and gender (Phillips et al., 2011).
Beyond demographics, important patterns were also observed in other image features. Unlike general CP images, most images depicted breast CPs as healthy (73.1%) and with positive affect (67.9%), whereas lung CPs were more often portrayed as ill (56.2%) and with negative affect (69.9%). These findings show how AI tools may reinforce both “aspirational” cancer experiences as well as negative stereotypes about a cancer diagnosis. Studies have shown that public attitudes toward lung cancer are more negative than attitudes toward breast cancer, and that lung cancer is more frequently associated with despair than breast cancer (Sriram et al., 2015). Whether AI images reinforce or challenge these beliefs is important to assess because the ways in which cancer patients are portrayed can influence both how people experience cancer (eg, sense of identity, expectations) and how people with cancer are treated by others (Putland et al., 2023).
Although lung cancer survival rates are lower than survival rates for breast cancer, consistently portraying lung CPs as gravely ill and hopeless could help reinforce the common view that the disease is always fatal, which could contribute to therapeutic nihilism and negatively impact health seeking behaviors and treatment decisions among individuals who have, or are at risk for, lung cancer (Sriram et al., 2015; Tran et al., 2015). On the other hand, although portraying breast CPs as healthy and happy could foster hope, it may also have negative effects, such as reducing the perceived seriousness of the disease and the urgency of taking preventive action (eg, screening) (McWhirter et al., 2012) or even alienating patients whose real-life experience with breast cancer is not as positive (Bock, 2013; Reffner Collins et al., 2024). These optimistic portrayals of breast CPs may reflect the widespread dissemination of images associated with “pink ribbon culture” promoted by breast cancer advocacy groups, companies, and the mass media, and which has been critiqued for “sugarcoating” the disease, emphasizing a forced cheerfulness (McDonnell et al., 2017), and centering individualism, feminine ideals, and an imperative for optimism (Gibson et al., 2014). Although it is not necessarily problematic for any single image to portray a healthy breast CP or an unhappy lung CP, the consistency of these depictions in the images generated may reinforce certain narratives. Ideally, images of CPs should portray the full spectrum of cancer experiences.
Additionally, the frequency of cancer ribbons and the color pink in breast CP images reflect commonly used visual markers of breast cancer (AbiGhannam et al., 2018). The pink ribbon has become an instantly recognizable symbol for breast cancer, and while it may have positive connotations of strength and hope, not everyone may perceive this symbol positively (Harvey & Strahilevitz, 2009). Additionally, the use of pink ribbons may be a way to signify cancer in visual materials without having to use “objectionable” or negative images of cancer, which makes it more palatable for use in communication efforts, including cause-related marketing campaigns (Harvey & Strahilevitz, 2009). As pink is commonly associated with femininity in Western culture, frequent use of pink may reinforce the idea of breast cancer as a “women’s disease” (Wagner, 2005).
Lastly, a substantial number of lung CP images included allusions to smoking, which could help further feed into the prevailing narrative that lung cancer is a self-inflicted disease caused by a person’s smoking (Tran et al., 2015), and increase societal stigma associated with lung cancer. It is possible that fear-based tobacco control campaigns have contributed to a prevalent visual association between lung cancer and cigarette smoking, as well as negative portrayals of lung cancer patients, and these associations are being reflected in AI-generated images.
The potential for AI-generated images to increase or reinforce stigma needs to be further evaluated and monitored because stigma can negatively impact lung cancer patients’ health and wellbeing, leading them to feel guilt or shame, experience psychological distress, and avoid or delay seeking medical care (Mazières et al., 2015; Tran et al., 2015). Additionally, exposure to negative representations of a group can reinforce stereotypes and encourage negative attitudes toward members of that group (Harwood, 2020). Work on social identity theory suggests a link between media exposure and intergroup outcomes through the media’s role in creating commonly held norms and conventions: by promoting representations that emphasize particular aspects of a group (eg, many lung cancer patients smoke) while ignoring others, media images and messages can play a role in creating shared norms and activating the use of these constructs in subsequent evaluations (Mastro, 2003; McKinley et al., 2014).
The potential for generative AI systems to reproduce and reinforce biases and stereotypes is concerning because these perspectives, hidden under the guise of technological neutrality, can influence social perceptions of reality (Gorska & Jemielniak, 2023), especially if disseminated on a large scale (Ali et al., 2024; Bianchi et al., 2023). Although modifying the prompts used could help mitigate some of the observed patterns (eg, specifying images of non-White patients to increase racial diversity), prompt engineering is generally quite limited as a solution to the complex and embedded biases in these tools (Bird et al., 2023). More systematic mitigation approaches, such as involving stakeholders in the design of these systems, using higher-quality training data, and enacting guidelines to increase transparency and accountability, are likely needed to help reduce the risk of harm from these tools (Bird et al., 2023). Furthermore, promoting greater AI literacy among communication practitioners, researchers, health care providers, and the general public may also help offset the potential harms of biased AI-generated images. Overly optimistic beliefs in the potential and objectivity of AI systems could cause individuals to perceive AI tools as more impartial or neutral than humans (Helberger et al., 2020; Klingbeil et al., 2024), and consequently be less wary of the potential for bias in the products generated by these systems. In addition, a better understanding of AI tools could help users develop more accurate perceptions of their capabilities, recognize biases in these tools, and make more appropriate decisions about what to do with the outputs they produce (Pinski & Benlian, 2024). Beyond didactic education to promote AI literacy, hands-on training may offer a tangible way to help learners understand how AI tools work. For example, a mobile app (“AiLingo”) that enables users to iteratively configure an image classification model by selecting different training data and assessing changes in the model’s performance was found to increase both subjective and objective AI literacy (Pinski et al., 2024). Finally, training in prompt engineering may help users of AI systems mitigate biases in the absence of more comprehensive solutions.
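To make the prompt-level mitigation mentioned above concrete, the following Python sketch shows one way a communicator might systematically vary demographic descriptors in a prompt template so that no single race or age pairing dominates a generated image set. The descriptor lists and template wording are illustrative assumptions, not the prompts used in this study, and the tool-specific image-generation API call is deliberately omitted.

```python
from itertools import product

# Illustrative descriptor lists -- these categories and terms are
# assumptions for the sketch, not the study's coding scheme.
RACE_ETHNICITY = ["White", "Black", "Asian", "Hispanic"]
AGE_GROUP = ["young adult", "middle-aged", "older adult"]
CANCER_TYPE = ["cancer", "breast cancer", "lung cancer", "prostate cancer"]

TEMPLATE = "a photo of a {race} {age} {cancer} patient"

def balanced_prompts():
    """Yield one prompt per demographic combination so the resulting
    image set is balanced across the specified descriptors."""
    for race, age, cancer in product(RACE_ETHNICITY, AGE_GROUP, CANCER_TYPE):
        yield TEMPLATE.format(race=race, age=age, cancer=cancer)

if __name__ == "__main__":
    prompts = list(balanced_prompts())
    print(f"{len(prompts)} prompts, e.g.: {prompts[0]}")
    # Each prompt would then be submitted to the text-to-image tool of
    # choice; the API call itself varies by tool and is omitted here.
```

As the surrounding text notes, this kind of prompt engineering can only shift surface-level patterns (such as depicted race or age) and does not address the deeper biases embedded in a model's training data.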
Limitations
While this study provides some important initial insights on the way CPs are portrayed in AI-generated images, it also has several limitations. First, some coding judgments regarding image characteristics are subjective and necessarily require some degree of interpretation. To improve reliability of coding decisions, all images were double-coded, discrepancies were adjudicated by a third coder, and all coders participated in a pilot coding phase to ensure coding categories were clear and applied consistently.
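As a minimal sketch of how such inter-coder reliability might be quantified (the study does not report a specific agreement statistic here), the snippet below computes Cohen's kappa for two coders' labels on a single hypothetical variable and flags the disagreements that would be routed to a third coder for adjudication. The label data are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Invented example labels for one coded variable (eg, affect) on ten images.
coder_a = ["positive", "negative", "neutral", "positive", "negative",
           "positive", "neutral", "negative", "positive", "positive"]
coder_b = ["positive", "negative", "positive", "positive", "negative",
           "neutral", "neutral", "negative", "positive", "positive"]

# Chance-corrected agreement between the two coders.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")

# Images on which the coders disagree would go to the third coder.
disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print(f"Images needing adjudication: {disagreements}")
```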
In addition, data were collected over several weeks in the Spring of 2024 using the most recent versions of the AI tools then available, therefore results reflect only the outputs of these tools at a particular moment in time. Given that AI tools are constantly evolving, these results may not generalize to subsequent iterations (Ali et al., 2024). Similarly, our study only looked at images across two leading AI models and may not generalize to other tools that employ different algorithms or training data. Lastly, only 40 images were examined per prompt, and it is possible that a larger number of images generated per prompt would have revealed a different set of patterns in the images.
Conclusion
The utilization of generative AI offers researchers and practitioners numerous potential advantages, including the ability to efficiently generate high-quality, customized images at low cost (Buzzaccarini et al., 2024). However, the results of our study suggest that the use of text-to-image models in cancer-related research and practice may be premature, and if they are used, careful consideration must be given to their flaws and potential biases. In the cancer context, users must be aware of the potential lack of diversity in these images, as well as the possibility of perpetuating certain cancer-related stereotypes (both positive and negative) when using AI-generated images in health communication messaging, interventions, or clinical practice. Finally, while this study focused on the portrayal of cancer patients in AI-generated images, the analytic approach outlined herein may be applied to other health conditions to better understand potential biases and narratives embedded in generative text-to-image AI models related to these diseases and the larger social implications of the potential widespread dissemination of the resulting images.