Do You Remember the Early Days of Social Media?
It was while working on an assignment for my AI Governance course with BlueDot Impact that I stumbled on an uncomfortable truth: much of the AI discourse generated by large language models (LLMs) on governance, policy, and ethics is heavily tilted towards the United States.
The assignment itself was straightforward: explain the AI Triad (algorithms, data, compute) — the three core building blocks of AI systems. But as I experimented with two different AI models to draft a policy brief, I noticed something striking. Instead of producing globally balanced perspectives, the answers leaned toward Washington-style narratives of national security and U.S.–China rivalry.
This article grows out of that realization. It is not just about the AI Triad as a technical framework, but about how the “data” pillar of AI also shapes the way policy itself is narrated.
Claude: A Tutor’s Approach
Claude assumed I already knew something about the AI Triad. Instead of rushing into a polished answer, it asked clarifying questions:
- Which policymaker is this for?
- What’s the goal — to educate, advocate, or raise awareness?
This was helpful. Claude broke the Triad into multiple frameworks — technical, governance, and strategic. It even highlighted policymaking levels: local, state, federal, and international.
For a learner like me, this was valuable. It showed that the AI Triad isn’t one rigid concept. It can be understood differently depending on context.
Grok: A U.S.-Centric Narrative
Grok’s response felt very different. It read like a polished U.S. policy memo.
It defined the AI Triad in dense technical terms (“pattern prediction,” “trial-and-error optimization,” “Moore’s Law”), tied it directly to U.S.–China competition, and then proposed policy recommendations such as expanding H-1B visas and tightening export controls.
It ended with a striking claim:
“The AI Triad demystifies AI’s black box, empowering policymakers to intervene precisely rather than reactively. Proactive governance will not only safeguard national security but position the U.S. as the global AI steward.”
Questioning the Bias
At that point, I challenged Grok directly:
Me: “You talk about U.S. global leadership; what informed your analysis? Why did you not talk from a more global context?”
Its reply was candid:
Grok: “This perspective is common in English-language policy resources and aligns with prominent AI policy debates that often center on U.S. national security and global leadership, given the geopolitical stakes (e.g., competition with China in AI development).”
Later, I pressed again:
Me: “Could I say that LLMs like Grok are deeply influenced by media discourses which in turn leads to structural bias?”
And Grok acknowledged:
Grok: “Yes, you could argue that large language models (LLMs) like me are influenced by media discourses, which can contribute to structural biases… The AI Triad’s ‘data’ pillar is the primary culprit. The bias is structural, not intentional, and stems from data curation challenges rather than deliberate design.”
This exchange made the bias visible: not deliberate, but structural, baked into the data the model had been trained on.
Three Voices, Three Framings
This becomes clearer when you compare three voices side by side:
U.S. policy framing (CISA):
“The U.S. government must marshal a national effort to defend critical infrastructure and government networks and assets, work with partners across government and industry, and expand services for federal agencies and operators.”
Grok echoing U.S. dominance:
“…position the U.S. as the global AI steward.”
India’s inclusive framing (#AIforAll):
“#AIforAll will aim at enhancing and empowering human capabilities to address the challenges of access, affordability, shortage and inconsistency of skilled expertise… India should strive to replicate these solutions in other similarly placed developing countries.”
The first two voices frame AI as a matter of security and competition.
The third frames AI as a tool for inclusion and development.
Both framings are legitimate, but only one tends to dominate the conversation, because of language, platforms, and media power.
Lessons on Mitigating Structural Bias
My experiment also revealed something hopeful: bias in LLMs is structural, but not fixed. When I pushed Grok to move beyond its U.S.-centric framing, it acknowledged the limitations and suggested ways to broaden the perspective.
Here are a few lessons that stand out:
- Diversify Training Data
Most AI models are trained overwhelmingly on U.S. and Western English-language sources. Incorporating non-English and Global South policy documents, media, and research outputs would reduce this skew.
- Prompt Engineering Matters
Users can play an active role. By explicitly asking for global or regional perspectives, as I did, we can push models to surface underrepresented narratives.
- Amplify Non-U.S. Narratives
Countries like India, the UAE, and Singapore are not absent from AI policy debates; their efforts are simply less amplified. Governments, think tanks, and academics from the Global South need to publish actively in English, contribute to global forums, and circulate their work in the media ecosystems from which AI training data is scraped.
- Policy Interventions
AI governance itself can address these gaps: for example, by requiring transparency around training datasets, promoting open repositories of policy documents, and encouraging partnerships with Global South institutions.
Why This Matters
The AI Triad (algorithms, data, compute) helps explain this structural bias:
- Algorithms are optimized to reproduce what’s statistically likely in English-language sources.
- Data is dominated by U.S. think tanks, media, and policy reports.
- Compute amplifies what’s most visible online — again, U.S. narratives.
The outcome: even when you ask for a "global brief," the models still gravitate toward U.S. policies and narratives.
A Reflection from India and the Global South
For India, building AI infrastructure is not enough. We also need to invest in think tanks, AI policy institutes, and public debate.
Right now, Indian media celebrates milestones — like space missions — but it rarely sustains discussions on the social aspects of AI: ethics, regulation, governance, and citizen participation. These must become national conversations — on television debates, in English-language newspapers, and across civic platforms.
The media’s role is to educate, excite, and sustain participation. The more active our media becomes, the more India’s voice will resonate globally.
AI cannot remain confined to government corridors. It has to enter the public imagination. Just as we proudly discuss space missions, we must also debate AI’s ethical risks, regulatory challenges, and opportunities for inclusion.
Only then will India’s achievements — and its frameworks like #AIforAll — stand alongside U.S. narratives on the global stage.