This article, in the form of an exposé, is a preview of my zine, The History Behind the Racial Bias in Healthcare. It focuses on how the history and impact of skin bias in dermatology contribute to racial disparities in healthcare for people of color in the United States. This bias stems from multiple factors, from the generalized case studies presented to medical students to the lack of inclusive data. Together, these factors increase the likelihood that medical professionals will misdiagnose patients whose skin tones differ from the population represented in the data.
It is essential to acknowledge and address skin bias in dermatology to improve healthcare outcomes for people of color in the United States. Addressing this issue will require not only raising awareness among healthcare professionals but also implementing structural changes in medical education, research, and practice. This article offers a comprehensive analysis of the problem of skin bias in dermatology and provides recommendations for addressing this critical issue in healthcare.
People of color, primarily Black and Hispanic, are more likely to experience skin diseases than White people in the United States.1 The factors affecting this disparity include genetics, cultural practices, socioeconomic background, access to healthcare, and the focus of this article: systemic racism.2
The difficulty doctors face in diagnosing skin conditions on varying skin tones stems from the inadequate representation of diverse skin types in medical research and education.3 Medical students are not sufficiently trained to identify diseases across diverse demographics. This contributes to systemic bias and results in a higher rate of undetected skin diseases among people of color.
The problem extends beyond dermatology and affects healthcare as a whole. Skin conditions are often the initial symptoms of illnesses developing elsewhere in the body, and failure to detect these early signs allows a disease to progress to more dangerous, even fatal, stages. This systemic bias leaves patients of color more vulnerable to serious illness and contributes to the misdiagnosis and inadequate treatment of people of color throughout the healthcare system. The resulting inequalities demand greater awareness, research, and education on how skin diseases appear across different skin tones, so that people of color receive equitable care and treatment.
To understand the historical context of this bias, it is essential to understand not only the use of photography in medicine but also how racism and colorism are connected to the history of photography itself. The era of photography began in the first half of the 19th century, when the earliest practical photographic processes were developed.4
Photography is a powerful tool, often considered an objective and truthful representation of reality, but it is not immune to the biases of the society in which it is created. As Caswell demonstrates, the history of photography is intertwined with racial bias and exclusion.5 Technology's bias in favor of lighter skin has had significant implications for the representation of racialized individuals.
Color film has historically been calibrated toward lighter skin tones since its introduction in the 1940s.6 The Shirley card presented here depicts a white woman with ginger-brown hair and pearly white clothing; the cards take their name from the original model, a white woman photographed in a colorful dress. Until the digital camera was introduced, the Shirley card served as the standard for color calibration of photographic film: the accuracy of the colors in a printed photograph depended on how the card's skin tone was rendered.7 Because the card was based on a white woman, its calibration was inherently biased toward lighter skin tones.
In the 1970s, photo companies such as Kodak received complaints from furniture manufacturers and chocolate companies that their film could not accurately capture the variations in dark-grained wood and the tones of brown in chocolate, respectively.8 Kodak responded by introducing multiracial Shirley cards, which included Black, Asian, and, soon after, Latina models, as seen here.9 Although this was a step toward inclusivity, the change was driven by industry complaints rather than any compelling social commitment to represent all people equitably.
As the film and television industries cast more diverse performers, the problems with color balancing at a professional level became more apparent.10 In the 1990s, the Dutch company Philips developed a camera system with two computer chips that balanced lighter and darker tones separately. With this innovation and the emergence of the digital camera, development of film-based photography stalled, and the Shirley card fell out of use.
While significant strides have been made toward accurate color photography across a wider range of skin tones, the bias toward lighter skin remains prevalent in technology today. Digital sensors often calibrate automatically to the brightest areas of the frame, underexposing darker skin and contributing, for example, to facial recognition systems' markedly higher error rates for dark-skinned individuals.11
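To make the mechanism concrete, what follows is a minimal, purely illustrative sketch in Python of an averaging exposure meter, not any specific camera's algorithm; the scene reflectance values and the 18% "middle gray" target are assumptions chosen for the example. It shows how a gain computed from the whole frame leaves darker skin far below the exposure target when bright regions dominate.

    # Illustrative sketch only (not any real camera's algorithm): a toy
    # averaging exposure meter that aims the frame's mean at "middle gray".
    import numpy as np

    TARGET_LUMINANCE = 0.18  # assumed exposure target for this example

    def exposure_gain(luminance):
        # Gain a simple averaging meter would apply to reach the target.
        return TARGET_LUMINANCE / luminance.mean()

    # Hypothetical scene: a darker-skinned face (reflectance ~0.06) in front
    # of a bright wall (reflectance ~0.80) that fills most of the frame.
    frame = np.full((100, 100), 0.80)
    frame[40:70, 40:60] = 0.06  # the face region

    gain = exposure_gain(frame)
    face_after = frame[40:70, 40:60].mean() * gain
    print(f"gain applied: {gain:.2f}")
    print(f"face luminance after exposure: {face_after:.3f} vs. target {TARGET_LUMINANCE}")
    # The bright background drags the gain down, leaving the face badly
    # underexposed: a digital analogue of the film-era calibration bias.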
Throughout the film era, medical professionals used photography to document and study skin conditions and diseases. The photographic film and development processes they relied on, however, were designed to produce accurate color balance for light skin tones and were not equipped to capture the range of skin tones present in people of color. As a result, photographs of these patients often appeared dark or discolored, and people of color were misrepresented or excluded from the visual record entirely, making it difficult for medical professionals to accurately diagnose and treat skin conditions in patients of color.
This issue was compounded by the fact that medical professionals at the time often relied on visual cues and skin color to diagnose skin conditions. The lack of accurate photographic representation of people of color in medical literature perpetuated skin bias by reinforcing the idea that white skin was the standard and any deviations from that norm were considered abnormal or pathological.
Even as photographic technology improved over the years, the bias towards white skin persisted. It wasn’t until the 1970s that medical photography began to reflect a more diverse range of skin tones. However, the legacy of skin bias in medical photography continues to impact healthcare today, as misdiagnosis and inadequate treatment continue to be a problem for people of color.
The historical bias towards white skin in medical photography has contributed to the persistence of skin bias in healthcare. Addressing this issue requires both awareness of the history of skin bias and a commitment to more accurately represent the diversity of skin tones in medical photography and healthcare more broadly.
White supremacist thinking frames white skin as “normal,” central, and therefore to be calibrated around and studied. It also dictates who has access to capital and education, who becomes doctors and technologists, and who funds and performs research. A 2018 report from the Association of American Medical Colleges found that 56% of physicians in the United States are white, compared to only 5% who are Black.
Dermatological research, too, has traditionally been focused on white skin. A study published in the Journal of the American Academy of Dermatology found that only 4.5% of skin-related clinical trials in the United States from 2011 to 2018 included non-white participants. Over 80% of participants in clinical trials for psoriasis were white, despite the fact that the disease affects people of all races and ethnicities.
Skin conditions manifest very differently across skin tones, necessitating individualized diagnostic approaches rather than a standardized, one-size-fits-all template. The same holds true for research on treatment efficacy: the effectiveness and potential side effects of treatments differ by skin tone and cannot be generalized. The lack of diversity both in research samples and among medical practitioners limits the generalizability of findings, the accuracy of diagnoses, and the effectiveness of treatments for patients of color.
Artificial intelligence and machine learning algorithms are increasingly used in healthcare, for example to help diagnose cancer. These technologies have the potential to assist doctors in diagnosis and to fill gaps in their knowledge and experience. But the historical bias in photography resurfaces here: the algorithms are often trained on images of white or light skin, and so risk reinforcing racial bias in dermatology rather than correcting it.12
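As an illustration of what this looks like in practice, the following is a minimal Python sketch using entirely hypothetical predictions and Fitzpatrick skin-type labels. It shows the kind of stratified audit researchers can run on a dermatology classifier: aggregate accuracy can look acceptable while performance on darker skin types lags far behind.

    # Minimal sketch with hypothetical data: auditing a classifier's accuracy
    # separately for lighter (Fitzpatrick I-III) and darker (IV-VI) skin types.
    from collections import defaultdict

    def accuracy_by_skin_type(records):
        # records: iterable of (skin_type, true_label, predicted_label) tuples
        correct = defaultdict(int)
        total = defaultdict(int)
        for skin_type, truth, prediction in records:
            total[skin_type] += 1
            correct[skin_type] += int(truth == prediction)
        return {t: correct[t] / total[t] for t in sorted(total)}

    # Hypothetical evaluation set, skewed toward lighter skin as many
    # dermatology image datasets are.
    sample = (
        [("I-III", "melanoma", "melanoma")] * 90
        + [("I-III", "melanoma", "benign")] * 10
        + [("IV-VI", "melanoma", "melanoma")] * 6
        + [("IV-VI", "melanoma", "benign")] * 4
    )

    print(accuracy_by_skin_type(sample))
    # {'I-III': 0.9, 'IV-VI': 0.6} while overall accuracy is roughly 0.87;
    # the aggregate figure hides the gap that matters for patients of color.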
A radical shift is needed in the conduct of dermatology research and practice. This means moving beyond a white-centric framework and prioritizing the health and well-being of all patients, regardless of their race or ethnicity. This requires a concerted effort to diversify clinical research, educate providers on the nuances of diagnosing and treating skin conditions in diverse patient populations, and develop algorithms and technologies that are specifically designed to address racial and ethnic differences in dermatology. Only then can we begin to dismantle the systemic racism that continues to pervade the field of dermatology, and ensure that all patients receive the care they deserve.