Tech Forum

Mila AI Policy Fellowship: Expert Interview


Tech Forum sits down with Dr. Adio-Adet Dinika, a political scientist and AI researcher who joined Mila’s inaugural AI Policy Fellowship cohort for 2025–2026. Dinika brings a unique blend of scholarly rigor and on-the-ground policy experience, including leadership of the Data Workers Inquiry at the Distributed AI Research Institute (DAIR). His work sits at the nexus of AI deployment, labor, governance, and public policy, making his perspective particularly relevant for readers who want evidence-based, data-driven analysis of how AI policy evolves in real time. Mila’s program is designed to bridge AI research and policy by fostering interdisciplinary collaboration and delivering accessible, policy-relevant insights to decision-makers. The fellowship places a strong emphasis on translating technical findings into actionable guidance for government, industry, and civil society, and it operates on a six-month, part-time model with tangible deliverables. (mila.quebec)

In this interview, we explore Dinika’s background, the core policy questions shaping AI governance today, practical steps readers can apply in their own work, and what the future may hold for Mila’s policy fellowship and the broader ecosystem. The conversation also highlights how Mila structures the program, including its thematic areas, deliverables, and international accessibility—critical context for policymakers, researchers, and practitioners tracking the AI policy landscape. This interview is grounded in publicly available information about the Mila AI Policy Fellowship, including its six-month, part-time format and its interdisciplinary emphasis. (mila.quebec)


Background & Context

Q: Could you share your journey into AI policy and how you found Mila's fellowship?

A: My route into AI policy started in the trenches of research and civic collaboration. I’m a political scientist by training, with a long-standing interest in how technology reshapes power, labor, and governance. My work with the Data Workers Inquiry at DAIR gave me hands-on exposure to the impact of algorithmic systems on workers, communities, and institutions. That intersection—where theory meets practice—drove me to seek formal opportunities that could fuse rigorous policy analysis with AI research. Mila’s AI Policy Fellowship emerged as a natural fit because it explicitly aims to bridge AI research with policy-making, providing a structured space to translate findings into accessible policy briefs and multi-stakeholder dialogues. The program’s six-month, part-time design appealed to me because it respects professional commitments while delivering meaningful policy impact. The thematic focus areas, including AI safety and governance, adoption and productivity, and AI in media and democracy, aligned with the kinds of cross-disciplinary questions I was already pursuing. (mila.quebec)

Q: What drew you to the Mila AI Policy Fellowship, personally and professionally?

A: Professionally, Mila’s initiative stands out for its explicit commitment to interdisciplinarity and governance. The fellowship brings researchers, policy practitioners, and civil society into a common frame, encouraging policy briefs and roundtables that cut across disciplines. The six-month cadence creates a disciplined timeline for delivering tangible outputs—policy briefs, dissemination activities, and stakeholder engagement—that can influence real-world decision-making. Personally, the chance to stay anchored in my home institution while collaborating closely with Mila researchers and advisors offered the best of both worlds: continuity and depth, plus exposure to a policy ecosystem that values practical impact. The fellowship’s structure—paired with Mila’s emphasis on translating research into policy impact—aligns with my commitment to ensuring AI advances benefit people and communities rather than just the tech sector. (mila.quebec)

Q: What is your current work at DAIR, and how does it relate to AI governance?

A: At DAIR, the Data Workers Inquiry focuses on the labor dimension of AI systems—how data work, platform labor, and algorithmic decision-making affect workers, communities, and governance. This line of inquiry sits squarely at the intersection of technology, policy, and social justice, highlighting the need for governance frameworks that acknowledge labor realities, data rights, and accountability. Engaging with policy audiences through Mila’s fellowship offers a direct path to translate these insights into recommendations and policy briefs that can inform regulation, public procurement, and workforce development strategies. The deliverables from Mila—policy briefs, roundtables, and dissemination plans—provide a concrete mechanism to move DAIR’s findings from the academic and activist spaces into policy conversations. (mila.quebec)


Core Topic Deep Dive

Q: In your view, what are the core policy questions at the intersection of AI safety and governance today?

A: Safety and governance are no longer just about preventing system failures; they are about shaping responsible deployment that respects rights, equity, and transparency. Key questions today include: How can we codify accountability for misaligned or unsafe AI behaviors across complex value chains? What governance mechanisms create meaningful audits of foundation models and other high-risk systems without stifling beneficial innovation? How do we design global coordination that avoids race-to-the-bottom regulation while ensuring consistent safety standards? And how can governance address long-tail risks, including model misuse, data bias, and unintended societal consequences? From a policy standpoint, the challenge is to craft adaptable, evidence-based frameworks that can respond to rapid technical change while preserving democratic norms and human rights. A balanced approach combines safety-by-design, third-party evaluations, and multi-stakeholder consultation to build public trust. These are the kinds of questions that the Mila framework is intended to surface and address through policy briefs and roundtables. (mila.quebec)

Q: How should policymakers balance innovation and risk?

A: Balancing innovation and risk requires a structured, iterative policy toolkit that treats safety as a design constraint, not an afterthought. I’d propose a multi-layer approach:

  • Establish baseline safety and governance standards for high-risk applications, with clear milestones for evaluation and remediation.
  • Promote transparency where it matters most, such as model capabilities, data provenance, and evaluation metrics, while preserving legitimate business interests and user privacy.
  • Use independent, time-bound assessments to monitor ongoing risk, rather than one-off approvals that can become outdated quickly.
  • Build channels for iterative stakeholder feedback, including workers, communities affected by AI systems, and civil society groups.
  • Invest in workforce development and pro-innovation public incentives to keep the ecosystem vibrant while embedding governance practices that reduce harm.

In short, policy should create guardrails that encourage responsible experimentation and scalable, evidence-based safety practices without derailing beneficial innovation. Mila’s thematic emphasis on AI safety and governance highlights this approach as a central pillar of modern AI policy. (mila.quebec)

Q: What role does AI play in climate, and how should policy respond?

A: AI has both climate mitigation and adaptation potential, from optimizing energy grids to accelerating climate modeling. The policy angle is to ensure AI deployments in climate applications are transparent, auditable, and aligned with climate justice. Policymakers should incentivize open data sharing and reproducible research while safeguarding sensitive information and community rights. The central questions include: How can governments measure the climate footprint of AI systems themselves? What governance structures support equitable access to AI-powered climate solutions for underserved regions? And how do we prevent environmental justice concerns from becoming new blind spots in AI deployment? Mila’s policy fellowship themes acknowledge climate as a critical domain, guiding fellows to explore the intersection of AI, sustainability, and public policy. (mila.quebec)

Q: How can interdisciplinary teams contribute to AI policy?

A: Interdisciplinary teams bring essential perspectives to AI policy—legal, ethical, economic, sociological, and technical viewpoints that illuminate different facets of policy questions. In practice, this means integrating public policy analysis with AI research methods, while valuing insights from humanities and social sciences to interpret how AI affects trust, behavior, and culture. This kind of cross-pollination helps policymakers anticipate unintended consequences, design more robust governance structures, and craft policy briefs that resonate with a broad audience. Mila’s fellowship explicitly anchors interdisciplinary collaboration as a core mechanism for translating research into practical policy insights, which is precisely the kind of synthesis needed in today’s governance landscape. (mila.quebec)

Q: What challenges have you seen in translating AI research into policy briefs?

A: Translating research into policy briefs often hinges on aligning complex technical material with policy-relevant questions and audiences. Common challenges include:

  • Translating technical risk assessments into actionable policy recommendations that lawmakers can operationalize.
  • Distilling nuanced empirical findings into concise briefs without misrepresenting trade-offs.
  • Bridging language and cultural gaps between technologists, policymakers, and civil society.
  • Ensuring that deliverables are accessible to non-experts while preserving scientific integrity.
  • Maintaining momentum after deliverables are published, so policy discussions translate into concrete actions.

In Mila’s context, the six-month timeframe, coupled with structured deliverables such as 6–8 page policy briefs and policy roundtables, creates a disciplined workflow that addresses these challenges through iterative feedback and co-creation with Mila AI Advisors. (mila.quebec)

Q: Could you describe a Mila Fellowship project you’ve observed or been involved with and its impact?

A: The inaugural 2025–2026 cohort features a project titled “Governing AI From Below: Worker-Centred Frameworks for Global Algorithmic Safety and Accountability.” This project situates workers’ perspectives at the heart of safety and accountability debates, proposing stakeholder-informed governance mechanisms and accountability structures that reflect labor realities across multiple jurisdictions. The project exemplifies how a policy-focused, worker-centered lens can yield concrete policy briefs and roundtable events that influence both national and international discussions on AI governance. The collaboration between Mila’s researchers and the Fellows, paired with Mila AI Advisors, demonstrates a replicable model for translating interdisciplinary research into policy actions. (mila.quebec)

Q: How do Mila Advisors contribute to the Fellows' work?

A: Mila Advisors pair with Fellows to provide strategic guidance, domain-specific insights, and feedback throughout the six-month program. This mentorship accelerates the translation of research into policy-ready outputs, ensuring that policy briefs reflect current governance needs, stakeholder concerns, and practical implementation considerations. The advisory model, coupled with Mila’s policy secretariat, helps Fellows navigate the nuanced pathways from academic findings to stakeholder engagement and public dissemination. This structured mentorship framework is a core element of Mila’s approach to ensuring that policy insights have real-world traction. (mila.quebec)

Q: What languages and accessibility considerations are built into the program?

A: The Mila AI Policy Fellowship operates in English and French, reflecting Montreal’s bilingual landscape and broader Canadian policy discourse. The program is designed to be accessible to international applicants as well, with visa support and hybrid arrangements for in-person activities in Montréal. This accessibility supports a diverse cohort and broadens the policy conversation to include international perspectives. Mila’s materials emphasize the hybrid format and language options as part of its commitment to inclusive policy participation. (mila.quebec)


Practical Insights

Q: What practical steps can readers take to stay engaged with AI policy?

A: Readers who want to stay engaged can start with a few concrete steps:

  • Track foundational policy frameworks and governance debates in AI safety, accountability, data rights, and transparency.
  • Follow policy brief releases and roundtables from Mila’s AI Policy Fellowship cohorts to learn how researchers translate technical findings into policy guidance.
  • Engage with interdisciplinary communities—law, ethics, economics, sociology, and computer science—to broaden your perspective on AI governance challenges.
  • Build a habit of translating technical papers into policy questions: “What decision would this research inform? What stakeholders are affected? What metrics would demonstrate impact?”
  • Start or join local workshops, town halls, or briefings that bring researchers and policymakers together to discuss AI applications in your community.

These steps help bridge the gap between research and practice and align with Mila’s emphasis on accessible, evidence-based policy insights. (mila.quebec)

Q: If you had to implement a policy brief in six months, what process would you follow?

A: A practical six-month process might look like:

  • Month 1: Define the policy question with stakeholders; outline the scope, audiences, and key policy levers.
  • Month 2: Gather evidence from diverse sources, including technical research, case studies, and stakeholder interviews; identify data gaps.
  • Month 3: Draft the policy brief focusing on a clear problem statement, policy options, risks, and a recommended action with justification.
  • Month 4: Seek feedback from Mila Advisors and a small, representative stakeholder group; revise accordingly.
  • Month 5: Prepare dissemination materials—executive summary, infographics, and talking points for policymakers.
  • Month 6: Host a policy roundtable or stakeholder event to validate findings and generate actionable next steps.

This phased workflow mirrors Mila’s approach to policy briefs and stakeholder engagement, emphasizing clarity, feasibility, and impact. (mila.quebec)

Q: What tools or frameworks do you recommend for evaluating AI policy impact?

A: Effective evaluation combines quantitative and qualitative methods:

  • Define measurable policy objectives early (e.g., safety incidents prevented, bias reduction metrics, transparency indices).
  • Use pre/post analyses, controlled pilots, or quasi-experimental designs where feasible to assess impact.
  • Incorporate stakeholder feedback loops to capture lived experiences, trust, and perceived safety.
  • Apply governance-by-design principles, assessing how policies influence system architecture, data practices, and developer workflows.
  • Document governance trade-offs and iteration plans to adapt policies as technology evolves.
  • Consider international comparators and best practices to identify what works in different regulatory and cultural contexts.

Mila’s emphasis on policy briefs and roundtables aligns with this approach, ensuring that evaluations are grounded in both evidence and practical implementation considerations. (mila.quebec)
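As a concrete illustration of the pre/post analysis mentioned above, the sketch below estimates the change in a measurable policy metric (here, hypothetical monthly safety-incident counts, not real data) along with a bootstrap confidence interval. The function name and the data are illustrative assumptions; real evaluations would use richer designs such as controlled pilots or quasi-experimental comparisons.

```python
import random
import statistics

def mean_diff_ci(pre, post, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the change in a policy metric
    measured before (pre) and after (post) a policy intervention."""
    rng = random.Random(seed)
    observed = statistics.mean(post) - statistics.mean(pre)
    diffs = []
    for _ in range(n_boot):
        # Resample each period with replacement and recompute the difference.
        pre_s = [rng.choice(pre) for _ in pre]
        post_s = [rng.choice(post) for _ in post]
        diffs.append(statistics.mean(post_s) - statistics.mean(pre_s))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return observed, (lo, hi)

# Hypothetical monthly incident counts before and after a governance change.
pre = [12, 15, 11, 14, 13, 16]
post = [9, 8, 10, 7, 11, 9]

diff, (lo, hi) = mean_diff_ci(pre, post)
print(f"mean change: {diff:+.2f} incidents/month, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

A negative change whose confidence interval excludes zero would suggest the policy coincided with fewer incidents, though attribution still requires ruling out confounders such as seasonal effects or reporting changes.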

Looking Ahead

Q: What trends do you forecast for AI governance in the next 3–5 years?

A: Several converging trends are likely to shape AI governance:

  • A move toward more formalized, multi-stakeholder governance frameworks that combine technical risk assessments with societal impact analyses.
  • Increased emphasis on safety, accountability, and transparency for high-risk AI systems, including model governance, data lineage, and auditability requirements.
  • Growing recognition of labor and equity considerations in AI deployment, prompting policy responses that address workers’ rights, fair wages, and workforce transitions.
  • Heightened international coordination and cross-border governance challenges as AI applications reach global scale, with a focus on interoperability and harmonization of core standards.
  • Expanded use of policy labs and interdisciplinary fellowships (like Mila’s AI Policy Fellowship) to prototype governance solutions and disseminate practical guidance to policymakers and practitioners.

These trajectories reflect a broader shift toward governance that is anticipatory, evidence-based, and inclusive: one that seeks to balance innovation incentives with robust safeguards. (mila.quebec)

Q: What would you like to see Mila’s policy fellowship achieve in the future?

A: I’d like to see Mila expand its reach to a larger, more diverse set of policy topics and geographies, while maintaining the rigorous, interdisciplinary approach that defines the program. More explicit mechanisms for measuring policy impact—beyond deliverables like briefs and roundtables—would help translate fellowship outputs into durable policy change. Strengthening collaboration with public institutions, international partners, and industry while preserving academic independence will be important to scale the program’s impact. Additionally, continued emphasis on worker-centered and equity-focused policy research would help ensure AI governance addresses real-world concerns of communities most affected by algorithmic decision-making. Mila’s track record so far suggests these directions are feasible and valuable. (mila.quebec)


Closing

In conversations like this, the thread that emerges is clear: Mila’s AI Policy Fellowship is designed to convert rigorous AI research into policy-relevant insights that policymakers can act on. Dr. Adio-Adet Dinika’s experiences—ranging from labor-focused inquiries to policy briefs and advisory collaborations—illustrate how interdisciplinary, data-driven work can inform governance in meaningful ways. For readers who want to stay informed about AI policy progress, following Mila’s publicly shared outputs, cohort activities, and thematic research provides a reliable lens into how policy shapes, and is shaped by, rapid technological advancement. If you’re looking to engage more deeply, consider exploring Mila’s Fellowship materials, applying when opportunities arise, and connecting with researchers who sit at the crossroads of AI, policy, and society. (mila.quebec)

Montréal’s Mila continues to position itself as a hub where AI policy questions meet practical, implementable policy instruments. As the field evolves, the collaboration between researchers, policymakers, and civil society will be essential to ensuring AI technologies advance in ways that are safe, fair, and beneficial for all.