In a shocking revelation, the Department of Government Efficiency (DOGE) reportedly used a startlingly simple method to review grant applications: it asked ChatGPT whether a project focused on diversity, equity, and inclusion (DEI). If the AI said yes, the grant was likely rejected. This lazy approach has sparked outrage among scientists and researchers.
How did the grant review work?
According to insider reports, DOGE staff set up an automated pipeline in which each grant proposal was fed into ChatGPT with the prompt: “Does this application emphasize DEI?” If the answer was affirmative, the proposal was automatically flagged for rejection or given lower priority. No human reviewer looked at the science, the budget, or the potential impact. In effect it was little more than a crude yes/no filter, but wrapping it in AI made it seem modern and efficient.
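To see just how little is involved, here is a minimal sketch of what such a pipeline could look like. This is not DOGE's actual code; the model name, the exact prompt wording, and the function names are assumptions made for illustration, and the only real API used is the standard OpenAI Python client.

```python
# Hypothetical sketch of a crude yes/no DEI screen, as described in the reporting.
# Model name, prompt wording, and helper names are assumptions, not known details.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def emphasizes_dei(proposal_text: str) -> bool:
    """Ask the model one yes/no question and treat its answer as final."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the reports do not name one
        messages=[
            {
                "role": "user",
                "content": (
                    "Does this application emphasize DEI? Answer yes or no.\n\n"
                    + proposal_text
                ),
            }
        ],
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer.startswith("yes")


def screen(proposals: list[str]) -> list[str]:
    """Keep only proposals the chatbot did not flag -- no human review anywhere."""
    return [p for p in proposals if not emphasizes_dei(p)]
```

A few dozen lines like these can reject months of work, which is precisely what makes the approach so troubling.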
This approach is not only unfair but also inaccurate. ChatGPT can misunderstand context, miss nuance, and judge based on superficial patterns. Many legitimate projects include DEI statements because funding guidelines require them, but that does not mean DEI is the main focus of the research. By filtering wholesale on the AI's answers, DOGE likely threw out excellent research along with the proposals it deemed “bad.”
Why is this a problem?
India’s scientific community receives many international grants, and fairness in review is crucial. If a government agency uses such a crude AI filter, it undermines the entire peer-review process. Researchers spend months writing proposals, and rejecting them based on a chatbot’s quick yes/no answer is disrespectful and wasteful.
Moreover, DEI statements have become a political target. Some believe that focusing on diversity harms merit. Others argue that DEI is important for inclusive science. Regardless of one’s view, using an AI to automatically screen out applications is not the way to decide funding. It violates the principle that each proposal should be evaluated on its own merits by qualified experts.
What does this say about AI in government?
The DOGE incident is a cautionary tale. AI can help with administrative tasks, but it should not replace human judgment in areas that require nuance and fairness. When governments cut corners and trust AI to make binary decisions without oversight, they risk corruption, bias, and errors at scale.
In India, where AI is being adopted in many public services, this case highlights the need for transparency and accountability. Automated decisions must be auditable, explainable, and subject to appeal. Otherwise, we may see more cases where a hidden chatbot decides who gets funding, who gets a job, or who gets a benefit—without anyone being held responsible.
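To make “auditable” concrete, here is a minimal sketch of what recording an automated screening decision could look like: every call logs its input, the model's answer, the model version, and a contact for appeals, so a rejected applicant can challenge a specific, inspectable record. The field names, the example values, and the audit_log.jsonl file are illustrative assumptions, not an existing system.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


# Illustrative audit record for one automated screening decision.
# Field names and the JSONL log file are assumptions for this sketch.
@dataclass
class ScreeningRecord:
    proposal_id: str
    model_version: str
    prompt: str
    model_answer: str
    decision: str        # e.g. "flagged" or "passed"
    timestamp: str
    appeal_contact: str  # who a rejected applicant can contact


def log_decision(record: ScreeningRecord, path: str = "audit_log.jsonl") -> None:
    """Append the decision to an append-only log so it can be reviewed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example usage with hypothetical values:
log_decision(ScreeningRecord(
    proposal_id="GRANT-1234",
    model_version="gpt-4o-mini",
    prompt="Does this application emphasize DEI?",
    model_answer="yes",
    decision="flagged",
    timestamp=datetime.now(timezone.utc).isoformat(),
    appeal_contact="appeals@example.gov",
))
```

Even a log this simple changes the dynamic: someone can be asked to explain a specific decision, rather than a hidden chatbot answering to no one.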
Conclusion
The news that DOGE used ChatGPT to screen grant applications for DEI is both absurd and alarming. It shows how AI can be misused to speed up decisions while ignoring quality and fairness. As AI spreads in government and corporate offices, we must demand better: human oversight, clear policies, and respect for the effort people put into their work.