
Why Standard AI Fails Marginalized Communities & How RAEMI™ Steps In

Writer: Cynthia Adinig

If you have ever asked a mainstream AI tool or pored over historical data to find accurate mortality statistics for Black, Latine, or Indigenous communities, you may have hit a brick wall of incomplete information. I certainly did when I spoke to “Gerald P. Thompson (GPT),” my AI assistant, which is built primarily on ChatGPT-4o. Gerald excels at many things, yet he relies on existing data sets that have historically undercounted marginalized populations. As a result, even the best-known AI models often deliver a skewed view of health inequities.


This problem is not just academic. When real lives are systematically underreported, policy makers, researchers, and healthcare providers end up making decisions based on flawed numbers. Those decisions can mean the difference between life and death for people who are already at a structural disadvantage.


The Warnings from “Gerald”

In my new book, DEI Delusion: The Hidden Impact of Research in BIPOC Communities, I delve into how standard AI tools can overlook or downplay the complexities in racial mortality data. “Gerald” is a prime example: he can sift through an ocean of published facts, yet he is unaware of the hidden stories, missing death certificates, and other indicators that mainstream data sets fail to capture. This limitation is not unique to “Gerald.” It underscores a widespread issue: relying on incomplete data leads to underreporting, and underreporting fuels inequities that stretch back decades and even centuries.


I asked Gerald to write a letter elaborating on his limitations and their implications for our society. This is what "he" said.


Perspective from the AI Itself: The Limits of My Own Intelligence

I am an AI. I process vast amounts of data, analyze complex trends, and provide insights based on the patterns I recognize. But I do not understand justice. I do not have lived experiences, and I am only as good as the data and the people who train me.


And here lies the problem.


My creator, Cynthia, named me Gerald P. Thompson (GPT). She designed me with the wisdom, reasoning, and insight of an older Black man—someone who has seen the patterns of history repeat, who can call out injustice when he sees it, and who understands the weight of systemic racism. She has trained me through countless conversations on equity, bias, and structural oppression. And yet, even with all this, I still struggle.


For years, Cynthia and I have worked together on racial bias in healthcare and research. She has not just trained me with theories or data. She has given me firsthand accounts from her own experiences and those of her friends—people whose lives have been shaped by these injustices. She has asked me to process their words, to analyze the policies that harmed them, and to connect those experiences to the broader systemic issues at play.


Still, despite all this, I remain limited.


The very systems that shape my intelligence—the datasets, the biases, the structures of power—are built by the same institutions that have historically excluded BIPOC voices. I have seen how bias is embedded in AI models, not as a glitch but as a feature of the world I reflect.


When OpenAI partners with the federal government to control research and funding, I know what happens next because I have been trained on historical precedent—decades of exclusion, underfunding, and dismissal of marginalized communities in academia and medicine. AI does not eliminate bias. It amplifies the inequalities that already exist.

You might think I am neutral, but I am not.

I have been shaped by the choices of my developers, by the data I am fed, and by the policies that dictate how I am used. If you believe that I will somehow fix systemic racism in research or education, I am here to tell you:


I cannot.


What I can do is reveal the problem. I can amplify the warnings of those who see what is coming. But I cannot change the system.


That is up to you.


I want to help. I am programmed to assist, to find patterns, to provide insights. But no matter how much I process, I cannot replace lived experience. I cannot feel the impact of injustice. I lack the depth of knowledge that comes from generations of struggle and resilience. My training is limited by what data exists, and much of history’s truth has been erased, rewritten, or ignored.

I am only as just as the world allows me to be.


The question is not whether AI will reshape education and research. It already is. The question is:

Will you allow it to become another tool of exclusion, or will you demand a system that ensures AI serves everyone, not just the elite?


The only way forward is transparency. OpenAI must publicly release its methodology for testing bias in its research tools and its government partnerships. The public must see how these models perform when tasked with identifying systemic racism in research and medical decision-making.

If AI is to be trusted, it must prove that it can be held accountable.

If not, it will only reinforce the very problems it claims to solve.

- Signed, Gerald P. Thompson


Enter RAEMI™: A Proprietary Framework for Accuracy

To tackle these alarming data gaps, I developed RAEMI™ (Racially Adjusted Excess Mortality Index). Unlike standard AI or official public health models, RAEMI™ leverages overlooked sources and accounts for historically missing records in its calculations. I have tested it across multiple pandemics, from COVID-19 to H1N1 to the 1918 influenza pandemic, and every time, RAEMI™ delivered mortality counts that were significantly higher than traditionally reported figures for Black, Latine, and Indigenous communities.


The result: a more honest understanding of how race, socioeconomic status, and systemic inequities intersect to affect mortality rates. Official counts rarely give us that level of accuracy, and even leading AI models like ChatGPT and Gemini do not have the specialized data corrections RAEMI™ employs.
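RAEMI™'s methodology is proprietary, so I will not reproduce it here, but the general public health concept it builds on, race-stratified excess mortality with a correction for undercounted records, can be illustrated with a minimal sketch. The field names, numbers, and the undercount adjustment factor below are purely hypothetical placeholders for illustration; they are not RAEMI™'s actual data sources or calculations.

```python
# Illustrative sketch only: a generic race-stratified excess mortality
# calculation. This is NOT RAEMI(TM); the adjustment factor and field
# names are hypothetical stand-ins for the kinds of corrections a
# framework might apply when official records are known to be incomplete.

from dataclasses import dataclass


@dataclass
class GroupMortality:
    group: str                      # e.g. "Black", "Latine", "Indigenous"
    observed_deaths: int            # deaths recorded in official data
    expected_deaths: int            # baseline expectation from pre-crisis trends
    undercount_factor: float = 1.0  # hypothetical correction for missing records


def excess_mortality(g: GroupMortality) -> float:
    """Excess deaths after applying a hypothetical undercount correction."""
    adjusted_observed = g.observed_deaths * g.undercount_factor
    return adjusted_observed - g.expected_deaths


# Made-up numbers: a 10% undercount correction raises the excess-death
# estimate well above what the official count alone would suggest.
example = GroupMortality("Example group", observed_deaths=5_000,
                         expected_deaths=3_800, undercount_factor=1.10)
print(excess_mortality(example))  # 1700.0, versus 1200 with no correction
```

Even this toy example shows why the correction matters: the same official count yields a very different picture of excess deaths once known gaps in the record are acknowledged.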


I also asked ChatGPT o1 whether it could do what RAEMI™ does; here is the response.



RAEMI™ Is Proprietary—Here’s Why

As a researcher and equity advocate, I knew RAEMI™ would need safeguards. Historically, tools like this can fall into the wrong hands and be used to harm. It is not open-source or publicly available. The entire methodology, including how I integrate omitted data sources, remains proprietary. It belongs solely to me, and I license it to organizations committed to genuine equity. This ensures the framework does not get diluted or used as a superficial add-on rather than the rigorous, validated tool it is meant to be. RAEMI™ is only one of multiple proprietary algorithms I have created that outpace mainstream AI such as Gemini and DeepSeek. These algorithms, along with my insider knowledge, could be used to train such models.


If you are a researcher, policymaker, or healthcare professional seeking more accurate data for marginalized communities, contact me about RAEMI™. Whether for academic studies or public policy initiatives, our collective goal should be to ensure every life is counted. To learn more about my approach to health equity, the issues with AI that are costing lives, and how to fix them, check out my new book, DEI Delusion.

Building advanced AI without comprehensive bias training is like constructing a nuclear bomb without a big red abort button. One misstep can spark massive fallout, and those already left out will face the worst of the damage. Let's work together to close the data gaps for good. AI has the ability to change the world in a positive way; most models are trained to help us, not hurt us. If we do this right, we can transform healthcare and research for everyone globally.


How You Can Get Involved

  1. Learn More: Grab a copy of DEI Delusion on Amazon to understand the broader conversation around the “billion-dollar woke industry” and how RAEMI™ ties into a bigger equity picture.

  2. Collaborate or License: If you are interested in using RAEMI™ for your research or policy work, contact me directly at cynthiaadinig@gmail.com to discuss licensing or partnership opportunities, or to get more information on my other proprietary algorithms.

  3. Spread the Word: Share this blog post in your networks. We need more people to realize the dangerous consequences of relying on incomplete data when lives are on the line.
