Governing AI: Pakistan’s Ambitious Policy and Its Untold Gaps

Artificial Intelligence (AI) has emerged as one of the most transformative forces of the 21st century, reshaping economies, governance, education, and even human relationships. Recognizing its potential, Pakistan approved its National Artificial Intelligence Policy 2025 on July 31, 2025. The policy, as analyzed by Usama Khilji in his Dawn article “Governing AI,” lays out a bold framework to harness AI for national progress.

Yet, as promising as the blueprint appears on paper, it raises significant concerns about inclusivity, transparency, ethics, and long-term viability. This blog takes a deep dive into the opportunities and challenges of governing AI in Pakistan, drawing on Khilji’s critique and broader global lessons.


The Ambitious Blueprint: What the Policy Promises

The National AI Policy 2025 outlines six major pillars, aiming to catapult Pakistan into the digital age:

  1. National AI Fund – Allocation of 30% of Ignite’s R&D fund to finance AI innovation.

  2. Centers of Excellence – Advanced hubs for AI research and industry collaboration.

  3. AI Council & Directorate – A national body to oversee implementation, regulation, and policy evolution.

  4. Regulatory Sandboxes – Safe spaces for startups and innovators to experiment under controlled conditions.

  5. Infrastructure Development – National-level compute and data systems to support AI adoption.

  6. Sectoral Pilots – Projects in healthcare, education, agriculture, and governance to demonstrate AI’s real-world benefits.

The policy also sets ambitious human capital targets: training 1 million professionals by 2030, offering 3,000 scholarships annually, and establishing paid internship opportunities. By embedding ethics, cybersecurity, and global standards into its vision, the government promises an AI ecosystem that is both future-ready and internationally credible.


The Missing Voices: A Top-Down Approach

Despite its strengths, the policy suffers from one glaring flaw: the absence of inclusivity in policymaking.

  • Civil society, digital rights experts, journalists, educators, and healthcare professionals were not consulted.

  • Parliamentary debate—especially within the IT committees—was bypassed, depriving the policy of democratic legitimacy.

  • Local communities most vulnerable to AI disruption were excluded from the conversation.

Without meaningful input from diverse stakeholders, the policy risks becoming a technocratic exercise rather than a people-centered framework.


The Digital Divide: An Unaddressed Barrier

A robust AI policy must address who has access to digital technology—and who doesn’t. In Pakistan, this divide is stark:

  • Mobile internet shutdowns, such as the three-week blackout in Balochistan, highlight how fragile digital access remains.

  • Thousands of schools, including 1,482 in Khyber Pakhtunkhwa, lack basic computer labs.

  • Rural populations, women, and marginalized groups remain disproportionately excluded.

The AI policy acknowledges the need for training but ignores the preconditions: reliable internet, devices, electricity, and updated curricula. Without bridging this gap, AI will likely reinforce inequality rather than reduce it.


The Ethics & Human Rights Question

Digital rights organizations, including the Digital Rights Foundation, have raised serious concerns about the policy’s lack of ethical safeguards.

  • No human rights impact assessments (HRIAs): These are standard in many countries to ensure AI doesn’t harm vulnerable groups.

  • Opaque implementation: The process of allocating funds and approving projects lacks clear accountability.

  • Marginalized communities overlooked: Instead of empowering them, AI may reinforce biases and deepen exclusion.

AI in policing, surveillance, or governance—without ethical guardrails—could pave the way for state overreach and privacy violations.


Gendered Dimensions: Invisible Risks

Though the policy highlights training opportunities for women and persons with disabilities, it fails to address gender-based digital violence.

Technology is increasingly weaponized against women in Pakistan:

  • AI-generated deepfake videos are used to harass, blackmail, and silence women.

  • Online platforms already lack robust safeguards against abuse.

  • Survivors face stigma, weak legal protection, and limited recourse.

As one expert put it:

“In the past, men would throw acid to destroy a woman’s life. Today, one AI-generated video can do the same.”

Unless the AI policy actively integrates protections against tech-facilitated gender violence, it risks worsening existing social inequalities.


Linguistic Inclusivity: A Forgotten Priority

AI in Pakistan cannot flourish without linguistic diversity. Yet, the policy makes no commitment to developing AI models for local languages—Urdu, Punjabi, Sindhi, Pashto, Balochi, and Seraiki.

This omission:

  • Reinforces dependence on foreign tech firms, whose models rarely reflect local culture or nuance.

  • Limits AI accessibility for the majority who are not fluent in English.

  • Misses an opportunity to build indigenous AI tools rooted in Pakistan’s social realities.

True digital sovereignty requires AI that speaks the language of its people.


Looking Ahead: What Governing AI Should Mean

Pakistan’s National AI Policy 2025 represents a critical step forward. But to succeed, it must evolve beyond ambition into a living, inclusive, and ethical framework.

Key steps moving forward:

  • Inclusive Policymaking: Bring civil society, academia, and marginalized communities to the table.

  • Bridging the Digital Divide: Ensure equitable infrastructure access before scaling AI.

  • Ethical Oversight: Establish an independent AI ethics body with teeth.

  • Gender Protection: Build safeguards against deepfakes and online harassment into the policy.

  • Local Language AI: Fund research into indigenous language models to make AI accessible.


Conclusion: Between Promise and Peril

AI offers Pakistan immense potential—from transforming healthcare delivery to making governance more efficient. But without inclusivity, ethics, and rights-based safeguards, it risks entrenching existing inequalities and fueling social harms.

As Usama Khilji’s analysis shows, Pakistan stands at a crossroads: will AI become a tool for empowerment and innovation, or another instrument of exclusion and control?

The answer lies not in the ambition of policies, but in the inclusivity of their governance.


👉 This blog is based on Usama Khilji’s article “Governing AI,” published in Dawn on September 4, 2025, and has been expanded to give a deeper understanding of Pakistan’s AI journey.
